Towards Scala 3 (scala-lang.org)
490 points by kushti 11 months ago | 376 comments



It's amazing how many people in this thread justify their own language choices by making negative, sweeping statements about another language (Scala in this case) that is successfully used by people other than themselves.

Yes, some people who previously used Scala, now use Kotlin. And some people who would've used Scala if Kotlin didn't exist, use Kotlin. Same probably goes for Rust.

But there is a big enough market for people that like the intricate and expressive type system that Scala gives you, in combination with the JVM ecosystem. People that think Kotlin is nice, but not expressive enough, for example. People who don't want to deal with Rust's memory management and/or don't have a use for that. People who think Go's simplicity is sometimes more of a burden.

Why are people so intent on bashing somebody else's language choices?! Go, Rust, Kotlin, Scala are all great languages in different ways. They all cater to different needs, sometimes radically different (Go vs Scala), sometimes subtly different (Kotlin vs Scala). I think there is a market for all of them, and more. And the introduction of a new language (Kotlin, for example) does not necessarily spell doom for another (Scala).

Let's all enjoy our own tastes and needs, and respect those of others.


The reality is that a language lives or dies by its ecosystem - particularly when it comes to a language like Scala that's in a tightly symbiotic relationship with its IDEs (the next time someone tries to sell you a "visual programming language", look at Scala for a language that makes really effective use of the GUI for programming without compromising the things that make textual programming languages good - see e.g. https://blog.jetbrains.com/scala/2018/03/27/intellij-scala-p... ). Only a few big players can afford the kind of investment it takes to make something like that. So much as I wish it were otherwise, I can't just sit on my island and use Scala because I think it's the best - if the language is to live, I have to convince other people it's the best. I don't begrudge other people feeling the same way about the languages they do like (provided that doesn't fall into dishonesty, as some of the claims from Kotlin advocates about e.g. null and Scala have).


As a fellow emigrant from the `Scala is the best` island, forced off for pragmatic reasons:

I've been around Scala long enough to see the rise and fall of multiple expeditions into the bowels of the OSGi Eclipse cave of horrors (Sean McDirmid / Miles Sabin etc.) before switching horses and settling on IntelliJ for several years. So I understand where you are coming from.

I've since jumped ship again to VS Code (along with TypeScript), and in my humble experience/opinion, VS Code and its Language Server Protocol (LSP) alleviate quite a bit of this. The LSP delegates a lot of common operations to the language's own compiler, while providing a consistent front-end. I think it would be a good thing if Scala eagerly adopted it. Having a language server protocol also applies back-pressure, in a certain sense, on language features: if you are going to add them, then the tooling (and LSP) needs to support them.


Not sure if you read it in the article, but Scala 3 will have LSP support (if it doesn't already).


It already has support although it is not complete: http://dotty.epfl.ch/docs/usage/ide-support.html


Could you expand on what this all is? I'm not familiar with what benefits an LSP provides. What is the world like before and after LSP support?


It moves code completion and refactoring into a dedicated process behind the LSP API, which is maintained by the language devs. As a consequence the IDE automatically keeps up with language changes.


I understand your strategic motivation, but to convince people your language is best, saying "my language is better than X" can be a very bad tactic. The reason is that most programmers aren't big PL fans, and that, to be honest, language choice (among various more-or-less suitable options) has a negligible -- if any -- effect on a business's bottom line. The biggest effect is on programmer enjoyment, as there are (perhaps unfortunately) many programmers who feel that their programming language does affect their satisfaction from programming. Such a preference for a language is often largely personal, but you can make your language more or less attractive to certain crowds. Creating a combative, competitive atmosphere often results in a very fervent community, but usually not a large one, as such an atmosphere doesn't have broad appeal. To increase the appeal of a language, the best things you can do are create great tooling, lots of useful libraries, and a friendly community.

In addition, an argument for "my language is better than X" can usually be easily countered with an argument for the opposite, creating a net result that doesn't help your goals. Debates are sometimes interesting and informative, but they're not effective marketing.


> language choice (among various more-or-less suitable options) has a negligible -- if any -- effect on a business's bottom line.

That's just, like, your opinion, man.

(By which I mean: I disagree, and we both know there's no clear evidence one way or another)

> To increase the appeal of a language, the best things you can do is create great tooling, lots of useful libraries, and a friendly community.

I do what I can, but a bad language can do those just as easily as a good language - tooling and libraries are much more a function of big-corp backing than they are of good language design. It's remarkable how much Scala has managed to achieve in those areas without having a big name behind it, but there's no competing with the amount of programmer-hours the likes of e.g. Google can pour in. The only way Scala can hope to win is on actual language design merit (and maybe winning popularity on that is impossible, but given how well Scala has managed to do so far, I remain hopeful).

> In addition, an argument for "my language is better than X" can usually be easily countered with an argument for the opposite, creating a net result that doesn't help your goals. Debates are sometimes interesting and informative, but they're not effective marketing.

A genuinely better language should have at least a slightly better chance of winning a debate over which language is better. If we don't believe that then we have no hope of ever learning truths, and may as well pick languages to use at random (or I guess go with whatever Google picked).


> I disagree, and we both know there's no clear evidence one way or another

I think that the lack of evidence in favor of a strong effect is normally counted in favor of the null hypothesis. It may not feel fair, but the burden of proof is solely on those who claim a strong effect exists, not on those who don't.

> but a bad language can do those just as easily as a good language

Again, the burden of proof is solely on those who claim there is such a thing as a good language with a strong effect. If you can't demonstrate an effect, or if others can achieve the same result by other means, that's a win for the null hypothesis. If you claim there is such a thing as a "bad" language, and there is no evidence this is true, doing so will harm your cause.

> A genuinely better language should have at least a slightly better chance of winning a debate over which language is better.

I just don't think that a debate is the right way to establish an empirical claim. It's ok to engage in a debate - even a heated one - but the lack of evidence should at least encourage humility. My point is just that an imagined "win" at a debate only harms your mission. If it convinces anyone, which is highly doubtful, it's probably not the people you want, anyway. An approach that says, "I think this is cool. I like it and it helps me and may help you, too" is so much more effective at marketing than "my way is best" especially if the evidence is not on your side.


> if the language is to live, I have to convince other people it's the best

Aren't JVM languages interoperable - because they use the same classfile/byte code format?

What does it matter if the jar is written in java or scala, as long as it is possible to use the external interface from any of these languages?


Yes and no.

JVM languages can usually use other JVM libraries, but they often aren't idiomatic to that language. Scala can use JVM libraries, but it often feels wrong or cumbersome, e.g. Java is full of mutable builder classes, which I've never once encountered in "native" Scala.

The other way around, in Scala you have to be careful if you want your library to be usable from other JVM languages. There are certain Scala features you just cannot use.

This is a problem that Kotlin actively markets itself on: they promise 100% Java compatibility all the time, going as far as encouraging you to mix Kotlin and Java code in a single project.
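To make the builder point concrete, here's a minimal sketch (with `java.lang.StringBuilder` standing in for the typical Java mutable-builder pattern):

```scala
// Calling a typical Java mutable builder from Scala works, but reads unidiomatically:
val sb = new java.lang.StringBuilder
sb.append("hello").append(", ").append("world")
val viaBuilder = sb.toString

// "Native" Scala code tends to favour immutable values and expressions instead:
val viaScala = List("hello", ", ", "world").mkString

assert(viaBuilder == viaScala)
```

Both compile and run fine; the friction is stylistic, not technical.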


You can still get into trouble with Kotlin, hence the interoperability guide for Android.

https://android.github.io/kotlin-guides/interop.html


What Scala features can't you use?


Real example from my experience: you use a Scala Map and pass it to something like FreeMarker, expecting that it will work like Java Map there. It will not and depending on your test coverage and SQA capabilities, you may notice it very quickly or very late (you'll get no compile error, since template engines usually accept generic objects and then analyze their type via reflection).
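For context: a Scala Map is not a java.util.Map, so a Java library that inspects the value via reflection will miss it entirely. The usual fix is an explicit conversion at the boundary; a sketch using the 2.12-era JavaConverters:

```scala
import scala.collection.JavaConverters._ // scala.jdk.CollectionConverters in 2.13+

val scalaMap = Map("name" -> "world")

// A Scala Map is not a java.util.Map, so reflection-based Java libraries
// (template engines, serializers) won't treat it like one:
assert(!scalaMap.isInstanceOf[java.util.Map[_, _]])

// Converting explicitly at the boundary avoids the surprise:
val javaMap: java.util.Map[String, String] = scalaMap.asJava
assert(javaMap.get("name") == "world")
```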


Why would you expect this to work? In fact, given that API, a map from another Java library (say, Guava) would have crashed just as well.


Well, I wouldn't even try to have source code written in two different languages in the same component. But I've seen this problem in someone else's real code, which was probably caused by a not-really-well-thought-out migration of Java code to Scala. Basically, a developer replaced one Map with another, fixed all the places where the API differed at the source code level, and fixed a number of tests. However, interoperability issues like this one may be hard to catch in general (even with good test coverage and regular code reviews), and they show the complexity of the problem, which requires a certain level of development skill and discipline to get right.


You are right, this particular example is more of a case against reflection than anything else. Having said that, Scala maps actually are tricky to use from Java. In general, Scala does a better job of compatibility when reusing Java code from Scala, but not the other way around.


Implicits, mainly. These are objects looked up by the compiler for various purposes, e.g. typeclasses, or how to serialize a specific object to JSON.

The Scala 2.8-2.12 collection library was impossible to call from Java, which is one of the reasons for the upcoming redesign in 2.13.
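A minimal sketch of the typeclass pattern that implicits enable (the `Show`/`describe` names here are purely illustrative):

```scala
// A typeclass encoded with implicits. Java callers can't participate in the
// implicit resolution the Scala compiler performs at the call site.
trait Show[A] { def show(a: A): String }

implicit val showInt: Show[Int] = (a: Int) => s"Int($a)"

def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)

assert(describe(42) == "Int(42)") // the compiler supplies showInt automatically
```

From Java, the best you can do is pass the instance explicitly as an extra argument; the automatic lookup is gone.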


The bytecode is the same but not the standard library of classes. Scala uses different collection classes, Option instead of Optional, companion objects instead of static classes, etc...

In my experience, mixing Scala and Java in a single codebase is inelegant, with plenty of conversion code at the boundaries.
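The Option/Optional mismatch usually means hand-written glue at the boundary, along these lines (a sketch; 2.13's scala.jdk.OptionConverters provides this for you):

```scala
import java.util.Optional

// Boundary conversions between Scala's Option and Java's Optional:
def toJava[A](o: Option[A]): Optional[A] =
  o.fold(Optional.empty[A]())(Optional.of)

def toScala[A](o: Optional[A]): Option[A] =
  if (o.isPresent) Some(o.get) else None

assert(toJava(Some(1)) == Optional.of(1))
assert(toScala(Optional.empty[Int]()) == None)
```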


No single person can maintain an entire ecosystem around a language. And programming languages have strong network effects. If I think a language is dying then I won’t want to use it because I’ll worry about future support. It becomes a self fulfilling prophecy and a negative feedback loop.


> Aren't JVM languages interoperable - because they use the same classfile/byte code format?

Up to a point - libraries from another paradigm tend not to be idiomatic, but I could live with that. I'm more worried about tooling availability - I talked about IDE support in particular, and it's also things like profilers and monitoring/instrumentation.


I'm not sure I see anything in that post that's tied to Scala in particular, or even really to visual programming. Are you referring to the hints? The parameter-name hints seem to be a result of having functions in general, and type hinting looks like a direct result of static typing; I like the idea, but it doesn't appear to me to be tied to Scala the language at all.

It looks like it's just tied to the JetBrains IDE.


The macro expansion is more tightly tied to Scala itself, as are things like showing implicit parameters. To a certain extent the things you mention are generic things, but Scala pushes type inference further than other languages that that IDE supports and makes more use of static types in general, so the IDE might well not have bothered with the type hints without Scala. Likewise Scala makes fuller use of named parameters than many languages (in Java parameters have names but you can't pass them with name=value syntax).
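For instance, named and default parameters -- which the IDE's parameter-name hints dovetail with -- look like this in Scala (a hypothetical `connect` for illustration):

```scala
// Named arguments can be given in any order, and defaults fill the gaps --
// call syntax Java simply doesn't have:
def connect(host: String, port: Int = 80, secure: Boolean = false): String =
  s"$host:$port secure=$secure"

assert(connect("example.com", secure = true) == "example.com:80 secure=true")
```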


> Why are people so intent on bashing somebody else's language choices?!

Partially it's simple tribalism. But part of it is also the rational awareness of the opportunity cost of investing in a language other than the author's preferred one. The more people using language X that I don't like, the fewer people using my preferred language Y. That means fewer libraries I can use, docs I can read, bugs that get fixed, etc.

Language ecosystems aren't entirely zero-sum, but they aren't totally orthogonal either.


That might be true to some extent, but many languages have completely different target audiences. People whose favourite language is Go will probably not move to Scala (and vice versa). Kotlin vs Scala is an easier-to-understand competition.

In any case, if people want other people to invest in "their language", they should focus on making that language and its ecosystem compelling to use, not bash other languages...


> People whose favourite language is Go, will probably not move to Scala (and vice versa). Kotlin vs Scala is an easier to understand competition.

That said, in terms of language features and type system, Kotlin is arguably closer to Go than it is to Scala. Yes, Kotlin competes with Scala on the JVM, but it's a very different language. Scala is much closer to a language such as OCaml than it is to Kotlin.


I have seen two types of bashers. One seems to be typical line-of-business app developers. They disparage other languages and praise theirs over basic things like IDEs, libs, etc. I don't mind these much.

However, others approach from a position of authority: compiler hackers, language authors themselves, very senior developers, etc. Ideally, criticism from them should carry more weight, but more often than not I have seen them make bad-faith arguments and justify their dislike with precise technical arguments, so they can't be challenged on non-technical grounds. It makes me wary of their arguments even on topics other than their favorite programming language.


> That might be true to some extent, but many languages have completely different target audiences

That's somewhat by happenstance though, isn't it? Take Go, build a good-enough scientific computing library, tease out good-enough performance, get enough coworkers on it, and you'll probably have Go advertising itself as a scientific computing language (when talking to the relevant people).

And then you'll probably have Go language devs implementing features better targeting the scientific computing community.

And as the new people feed in, implementing their own needs and libraries, suddenly Go becomes good at ML...

And eventually we reach an '80s C-like status, advertised for nearly everything, because of its heavyweight ecosystem.

In terms of ecosystem, what a language is good at really isn't constant. What might be constant is the flavour of programming the language prefers; you're probably not getting rid of goroutines as a significant language feature no matter how big Go gets.


Agreed.

Here's my view on languages: They have strengths and weaknesses. They cater to different needs, as you say. If you have the need for what Rust, say, does, and Rust solves some real problems for you and makes your job a lot easier, then it's kind of natural that you think Rust is wonderful. In fact, what you found is that Rust is wonderful for that problem, not that it's wonderful in general. But it's real easy to think that your situation is more universal than it is, and therefore that Rust (in this example) is this wonderful language that makes all of programming so much better.

Once you've fallen into that flawed perspective, then it becomes easy to criticize other languages. Why would you ever want to use Scala? It doesn't have Rust's advantages. But people miss that, if you have a Scala problem rather than a Rust problem, and you pick Rust anyway, it's not going to go well...


Part of the issue with alternative JVM languages is that there isn't a good standard way to mix them within a single application. There are various compatibility layers for calling foreign functions but nothing baked into the lower-level platform. So that forces a degree of competition and mutual exclusion.

Whereas with Microsoft's .NET CLR there's a standard calling convention supported by all the languages. So it's easy to mix and match C#, F#, VB.NET, etc within a single application or reuse libraries. So developers have more freedom to pick the best language for each problem domain.

(It looks like Hacker News filters out the Unicode sharp symbol. Why?)


HN throws away all Unicode symbols.


Divide and conquer while we rule above the average, enjoying our parenthesis and the riches they bring us.

;)


No! Clojure is da best! jk...


So many cool things already done and even more to come. Some parts that excite me as a Scala nerd:

* One can now use implicit function types to basically build your own table language syntax that is type-safe. [1]

* Multiversal Equality: you get to decide whether it makes any sense to compare an Apple and an Orange using "==" or "!=", as opposed to Java's forced requirement of allowing you to compare anything with anything. [2]

* Null safety checks! Quoting from [3], "Adding a null value to every type has been called a "Billion Dollar Mistake" by its inventor, Tony Hoare. With the introduction of union types, we can now do better. A type like String will not carry the null value. To express that a value can be null, one will use the union type String | Null instead."

* First-class enums, finally. [4]

* Erased parameters: you can declare variables specifically for type-safety that _don't exist_ during run-time, improving efficiency. [5]

[1] http://dotty.epfl.ch/docs/reference/implicit-function-types....

[2] http://dotty.epfl.ch/docs/reference/multiversal-equality.htm...

[3] http://dotty.epfl.ch/docs/reference/overview.html

[4] http://dotty.epfl.ch/docs/reference/enums/enums.html

[5] http://dotty.epfl.ch/docs/reference/erased-terms.html
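As a taste of [4], first-class enums in Scala 3 / Dotty syntax (a sketch; this needs a Scala 3 compiler):

```scala
// First-class enums replace the old "sealed trait + case objects" boilerplate:
enum Color:
  case Red, Green, Blue

@main def demo(): Unit =
  assert(Color.values.length == 3)
  assert(Color.valueOf("Green") == Color.Green)
```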


> * Multiversal Equality: you get to decide whether it makes any sense to compare an Apple and an Orange using "==" or "!=", as opposed to Java's forced requirement of allowing you to compare anything with anything. [2]

Unfortunately it's being introduced in a fail-unsafe way that as far as I can see makes it virtually useless. If I see "x == y" in code, I have no way to be confident that this isn't an old-fashioned universal comparison without going into the details of x and y, so I'm no better off than I was without this feature.


It looks like you can force strict equality everywhere with the flag '-language:strictEquality'. See the dotty documentation for more details: http://dotty.epfl.ch/docs/reference/multiversal-equality.htm...

EDIT just realized you are m50d on reddit and probably saw the exact same comment I got that from.
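Under that flag, == requires evidence that the comparison makes sense; a sketch in Scala 3 syntax:

```scala
// With strictEquality enabled, == needs a CanEqual instance in scope.
import scala.language.strictEquality

case class Apple(id: Int) derives CanEqual
case class Orange(id: Int)

@main def demo(): Unit =
  assert(Apple(1) == Apple(1)) // fine: CanEqual[Apple, Apple] is derived
  // Apple(1) == Orange(1)     // would not compile: no CanEqual instance
```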


Yea, I'd love a compiler switch that enables the strict mode globally, while adding the leniency on a case-by-case basis instead.


You can always switch to using Eq and its === operator from Cats or Scalaz.
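The Eq/=== pattern can be sketched by hand to show the idea (Cats and Scalaz ship polished versions of this):

```scala
// A minimal Eq typeclass: === only compiles when both sides share a type
// that has an Eq instance in scope, unlike universal ==.
trait Eq[A] { def eqv(x: A, y: A): Boolean }

implicit class EqOps[A](x: A) {
  def ===(y: A)(implicit e: Eq[A]): Boolean = e.eqv(x, y)
}

implicit val eqInt: Eq[Int] = (a: Int, b: Int) => a == b

assert((1 === 1) && !(1 === 2))
// 1 === "one"  // would not compile: the two sides don't share a type with an Eq
```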


Implicits are a massive pain in the ass.


...ships in 2020?

I'm impressed with what they're doing (and love the language), and it's hard work, and their speed is faster than, say, Java's dead-pace evolution in the 2000s.

But, poking around at TypeScript, I've been blown away with the MS/TS speed of development. ~2-3 month release cycles, with non-trivial changes to the language (mapped types, conditional types, etc.), that are themselves unique/novel type system features, not like Java finally getting around to copying the obvious/best-practice approach to case/data classes.

Granted, I'm sure the MS/TypeScript budget is huge comparatively...

Seems like that's the "best" (realistic) way for dev tooling to evolve lately: come from, or attach yourself to, a ~top-5 tech company that makes their money somewhere else and can bankroll developer tools/languages (e.g. MS with TS, FB with React/Flow, Google with a myriad of things e.g. Dart & AdWords, Go).

Bringing it back to Scala, seems like they were close to this (flirted with adoption/sponsorship from a few startups like FourSquare, etc.) but, thinking about it now, "software development in the large" is a huge concern for those companies (e.g. anyone with enough extra money to actually pay for language/tooling improvements), and that's never really been Scala's strong suit (compile times, painless compiler upgrades).


TypeScript has a comparatively easier job because the type system is unsound. They can add new type features without needing to be paranoid about a hole in the system or a usability pitfall because the tools don't rely on types for correctness or optimization, and because users can easily cast to "any" or otherwise work around an annoyance.

When you want the whole system to actually deeply rely on the static type system, it needs to be much more robust, and that takes a lot of effort.

I'm not trying to minimize what the TypeScript folks are doing — it's a really nice, well-designed language. But the complexity required to evolve an optionally-typed language is closer to that of a dynamically-typed one than a statically-typed one.


Good point; I've mused before that dynamic languages will always be a ~generation ahead of static languages, due to what you point out, that type systems are hard, and very few people start working on a new type system as their hobby project (vs. hacking out a new syntax with an interpreter).

But I hadn't appreciated the difference in difficulty between an unsound type system like TS and Scala's type system...

I guess I've taken the formalism that underlies Scala's type system (e.g. the academic research behind dotty) as a given/for granted, but you're likely right, that TS can move faster because it doesn't need/have that level of rigor.

Thanks!


> I've mused before that dynamic languages will always be a ~generation ahead of static language

Did you mean "behind"?

The trend is pretty clear: dynamically typed languages are disappearing and all the popular modern languages are statically typed with type inference.

Most dynamically typed languages are busy adding static safety through gradual or opt-in typing, but you don't see the other way around.

Javascript will probably be the last major dynamically typed language we ever had to use.


> dynamically typed languages are disappearing and all the popular modern languages are statically typed with type inference

It does seem that all the dynamic languages are learning what the systems people knew in the '70s: for the Enterprise, correctness counts.

A happy v1 is only 10% of the story... Getting to v225.5, warts and all, is just something else.


Is it actually unsound if you perform no casting? That seems to be a more interesting property, because then the additions they're making still have to mesh cohesively.


> Is it actually unsound if you perform no casting?

Yeah, it is. At the very least, I know that function parameters are bivariant, which isn't sound:

    function foo(callback: (obj: Object) => void) {
      callback("not a bool");
    }

    function main() {
      foo(function (bool: Boolean) {
        if (bool) window.console.log("!");
      });
    }
This compiles without a type error but ends up assigning a string to a parameter of type Boolean.


I'm a Flow user rather than TS (and Flow complains about that code out of the box), but you're not telling the whole story:

Since TS 2.6, if you enable all the optional "strict" mode checks, then TypeScript does complain about that code.

I see no disadvantage in those checks, e.g. "strictFunctionTypes", being optional, since you can always make them mandatory for your own project(s).

To test it:

- Go to the TS Playground at https://www.typescriptlang.org/play/

- Enter the above code

- Select "Options" and enable at least "strictFunctionTypes"


Is that option documented somewhere? I imagine there are some tradeoffs with enabling it.


https://www.typescriptlang.org/docs/handbook/release-notes/t...

> TypeScript 2.6 introduces a new strict checking flag, --strictFunctionTypes.

> The --strictFunctionTypes switch is part of the --strict family of switches, meaning that it defaults to on in --strict mode.

> Under --strictFunctionTypes function type parameter positions are checked contravariantly instead of bivariantly.

> ...(etc.)


Contravariance then, right. Thanks! I haven't gotten into Typescript yet, and at this point it looks fairly large and a little daunting, so this is helpful.


And this project is super interesting: https://1c.wizawu.com/#/whatis Use TypeScript on JVM with both NPM and Maven dependencies


The major tradeoff, IMO, is that it is even toggle-able at all, let alone being a feature switch that is enabled by default.

If you enter any random TS project today, odds are it isn't using that switch, and enabling it will lead you down a month-long rabbit hole of fixing cascading type errors.


Not only that, but since it isn't required many libraries in the ecosystem won't be using it. Comparing my experiences with TypeScript and Flow vs. my experiences with Haskell, any optional type system is going to leave a lot of gaps if you don't have a pretty strict not-invented-here culture.


Tradeoffs? It might frustrate some people more accustomed to JS' loose type system.


Yes, look at how it deals with type param variance for example. It doesn't.


For what it's worth, this isn't just an evolution of the language, but a complete rewrite of the compiler. I appreciate that they're taking the time to do it right. They probably could've pushed new, non-trivial features out faster if they weren't doing a total rewrite.


Pity that in this rewrite they were unable to improve compilation speed. From the latest reports, Dotty appears to compile at about the same speed as (if not more slowly than) scalac.


Where did you get that from? I’ve seen performance increases of 2-3x mentioned on conference talks: eg here https://d-d.me/talks/scalaworld2015/ slide 50.


This is pure speculation on my part, but my guess would be that better, simpler foundations and a fresh start end up unlocking a lot of optimization opportunities in the long run. But it would seem foolish to invest too much in optimization while things are still in flux.


Scalac has evolved considerably, but it appears to be much harder to evolve. The Scalac team and contributors are backporting some of the Dotty features, but this is far from a trivial effort. Doing the same changes in Dotty is typically much easier in comparison.


This does nothing to change my opinion that total rewrites are almost always a horrible idea.


The Scala compiler has to be cleaned up. Maybe rewritten, maybe not. Dotty started as a fork of Scala, and now Odersky decided/announced that it'll be merged back eventually. (And it should conform to whatever the Scala language is/will be after the SIP processes for 3.0.)


I've never seen a "big" change to a system or project that wasn't effectively a total rewrite.

I guess it's always envisioned differently like "can we tackle this subsystem in isolation".


The old saying: "It's bigger and slower, but at least it's more expensive!"

My experience says a rewrite just has different bugs. Combined with the delay and cost, it's often not worthwhile.


> I've been blown away with the MS/TS speed of development

TypeScript broke my code for the past year on every new release.

---

Scala evolves a lot on each new version. There's a big difference between 2.10 and 2.12, the current version.

Dotty is a big change. That it ships in 2020 is OK. Shipping it any sooner would be risky.


It's not the next release. There are still 2.13 and 2.14 to be released. Are you saying TypeScript has a major version release every 3 months?


Typescript does indeed do frequent releases, though they're not nearly as ambitious as Scala 3.0.

A Typescript release usually has 1-3 exciting new features, and a number of smaller bug fixes and improvements.


Correct, but between now and 2020 it's almost certain that, in aggregate, TypeScript will have shipped significantly more features than Scala 3.0, and they will have had the benefit of being used and iterated upon for 2+ years.


Like many things, the utility of a language can actually decrease as you add more things on to it. Both Scala and Go take a principled approach of only accepting heavily vetted features.

The time it takes to develop a feature is very rarely the bottleneck.


There are developer releases through 2019 as well.

So it's not like it won't be used until 2020.


That's my negative point about Kotlin (sorry, off-topic here, but still). It moved fast before release, but for a long time now there have been only very minor releases (well, coroutines were a nice addition, and that's about it), despite the fact that there's a lot to improve in Kotlin.


There are a lot of comments pointing out pros and cons of Scala, comparing it to other languages using superficial proxies.

I'd like to share a different perspective based on my own work with OOP-centric and FP-centric languages.

If you take OOP to its logical conclusion, OOP is a slippery slope that eventually leads to Gang-Of-Four centric designs.

If you take FP to its logical conclusion, FP is a slippery slope that eventually leads to Monad-Transformer centric designs.

Both are equally valid ways to design complex applications, so it all comes down to individual taste.

Personally I have a taste for mathematical abstractions, so Scala is better suited for my way of thinking.

It's obvious that Scala was specifically designed to be a solid foundation on which one can implement the concepts of a little-known branch of mathematics called category theory, and almost every design choice in the language flows from there.

If we use this as the basis for comparison to other languages, it's very easy to understand that Scala has carved out a very powerful niche for itself and is here to stay.


> It's obvious that Scala was specifically designed to be a solid foundation on which one can implement the concepts of a little-known branch of mathematics called category theory, and almost every design choice in the language flows from there.

Nonsense. Scala was designed to make programming easier and safer than Java - XML literals and pattern matching certainly have nothing to do with category theory. Scala's for/yield has surprising behaviour when e.g. mixing lists and sets, precisely because it doesn't have a categorical grounding and was implemented in terms of what working programmers wanted to do with it rather than enforcing that you've formed a valid semigroupoid in the category of endofunctors.

Over time it's emerged that some categorical constructs provide useful ways of thinking about code and solving programming problems, but you're putting the cart before the horse if you think the language was designed for category theory.


> Scala's for/yield has surprising behaviour ... because it doesn't have a categorical grounding

Not sure what you're talking about, but for/yield is syntactic sugar for map/flatMap, just like the "do notation" is in Haskell. Of course it has theoretical grounding, because it wouldn't work without flatMap being the monadic bind.
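A minimal sketch of that desugaring (the values here are just illustrative):

```scala
// The for/yield form...
val viaFor = for {
  a <- List(1, 2)
  b <- List(10, 20)
} yield a * b

// ...is rewritten by the compiler into flatMap/map calls:
val viaDesugar = List(1, 2).flatMap(a => List(10, 20).map(b => a * b))

// Both produce List(10, 20, 20, 40)
```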

> mixing lists and sets

Again, not entirely sure what you're talking about, but if true, it's entirely unrelated to for/yield.


I believe that he's referring to the fact that Scala's Set is not a monad, so it "shouldn't" have a .flatMap operation.

Scala has other containers that have a .flatMap operation that are not real monads, such as Future and Try.

That can lead to surprising behavior in for/yield loops.
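To illustrate the Try case: it's a standard observation (not specific to this thread) that Try fails the left-identity monad law, because its flatMap catches exceptions that a direct function application would throw:

```scala
import scala.util.Try

// A function that throws instead of returning a Failure:
val f: Int => Try[Int] = _ => throw new RuntimeException("boom")

// Left identity would demand that Try(1).flatMap(f) behave like f(1),
// but f(1) throws, while flatMap catches the exception:
val viaFlatMap = Try(1).flatMap(f)   // a Failure, not a thrown exception
```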

Disclaimer: I do work in Scala and enjoy the language.


I mean if you write something like:

    (for {
      x <- List(1, 1, 1)
      y <- Set(x)
      z <- List(y, y)
    } yield z).length
the answer will be rather surprising, because in Scala for/yield is not necessarily monadic.
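Desugaring the comprehension shows where the duplicates go (assuming the standard library's behaviour, where Set's flatMap builds a Set and so deduplicates):

```scala
// The comprehension desugars to nested flatMap/map calls:
val desugared =
  List(1, 1, 1).flatMap(x =>
    Set(x).flatMap(y =>          // Set#flatMap builds a Set, deduplicating
      List(y, y).map(z => z)))

// Set(1).flatMap(y => List(y, y)) collapses to Set(1), so the outer
// flatMap yields 3 elements rather than the 6 a monadic reading suggests.
```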


It's only surprising if you ascribe monadic properties to that syntax versus understanding it as literal syntactic sugar and absolutely nothing else.


I would say: it is `Set` and `List` that are wrong in that example, not `for`. That said, `for` really does have some issues (enter https://github.com/oleg-py/better-monadic-for/blob/master/RE...)


> If you take FP to its logical conclusion, FP is a slippery slope that eventually leads to Monad-Transformer centric designs.

I don't understand what you mean by this, and I write production haskell for a living. We don't have teetering towers of transformers, and the best advice I've seen is often "put away the shiny tools and just use functions", https://lukepalmer.wordpress.com/2010/01/24/haskell-antipatt... . Similarly in http://www.parsonsmatt.org/2018/03/22/three_layer_haskell_ca... , which is like one real transformer layer and a way of claiming only the capabilities you need in your impure code. His "invert your mocks" article talks about this, too.

Ed Kmett's comments on Scala make me worry that the "solid foundation" isn't as solid as it could be: https://www.reddit.com/r/haskell/comments/1pjjy5/odersky_the... . How much of this is true in more recent Scala versions?


> Ed Kmett's comments on Scala make me worry that the "solid foundation" isn't as solid as it could be: https://www.reddit.com/r/haskell/comments/1pjjy5/odersky_the.... . How much of this is true in more recent Scala versions?

Some are fixed (Either, inference for many of the type lambda cases), some are being fixed in the next version (container overloads, CanBuildFrom), some are not really issues at all (free theorems are fine, an extra map on a monad is not a problem and sometimes more efficient, specific compiler bugs have been fixed but never said anything about the language in general, Haskell as used in the wild (with orphan instances) doesn't guarantee typeclass coherence either), some are real but exaggerated issues (subtyping and implicits, type inference for tricky recursion), a few are genuine issues that remain and probably always will (having to trampoline your monads, no kind system).


It's been a while since I lurked Haskell forums/haunts (was into it way back when), but at that time Lens was a big deal...if that's not a highfalutin library made for the most entrenched monad geeks (and for the purpose of making setters and getters, no less), I don't know what is. I guess I'm really shocked to hear that Haskell is all pragmatic and simple now with slim abstractions...but I'd love to be corrected.


I'd suggest that you're giving Lens a pretty short shrift. It's not "just" getters and setters as it works on immutable data. It also has a composition story that's better than normal getters and setters and _much_ better than other immutable data update stories.


I disagree with that. Haskell is category theory as a language. The Scala community got invaded by Haskellites trying to turn Scala into Haskell, but there’s a particular style of OOP/FP hybrid that requires a language like Scala to use/teach/explore. The problem is that in creating the tool to enable that, we ended up with a language that was too big to have a coherent style, which led to an incredibly fractured community.


> I disagree with that. Haskell is category theory as a language

No it is really not. Haskell only really has one category of any import, typically denoted 'Hask'. There is no builtin way to embed or express arbitrary categories in Haskell. It's not even clear what this would mean, given that Haskell only has one conception of arrow (the function type '->'). In fact, Haskell is so far removed from true category theory that we have to use a compiler plugin (CCC by Conal Elliot) to get anything approaching compilation to arbitrary categories.

Haskell is based on the polymorphically typed lambda calculus. Some decades after its design, some people realized they could use some abstract algebra concepts to help structure programs. You can use these concepts in any language with a sufficiently expressive type system. It is not specific to Haskell.


Designing good multi-paradigm languages is very hard. I'm a big fan of Mozart-Oz and Common Lisp, and I think Odersky did a really good job with Scala. In fact, given that Common Lisp is languishing and Mozart was never a serious real world contender, Scala fills in a niche where there are not many competitors. C++, just for some use cases, Julia for others, and .NET.

The fact that many organizations working on massive datasets have embraced Scala proves there are not many alternatives around, sadly.


> Common Lisp is languishing

can you elaborate more on that?


I don't think CL is attracting a lot of new developers or there's a lot of new libraries getting developed. But I would be glad to be proven wrong. Perhaps Racket will eventually become a CL replacement, now it's adopting a lot of Chez low-level stuff and its multiparadigm efforts keep growing.


Do you have any ideas why Racket has not had the commercial success of Clojure for example?


My guess would be the JVM. Racket is an academic language, designed for academia. At least it markets itself as such. Made to explore the design space of programming languages.

Clojure was designed and marketed for the enterprise. Builds on the JVM, full Java interop, emphasis on pragmatism.


   Haskell is category theory 
   as a language.
I disagree with this. Haskell's creation [1, 2] predates the realisation (by the FP community at large) of the close connection between parts of category theory and parts of pure functional programming, which happened in the 1990s, perhaps driven by Moggi's realisation that monads are a fundamental abstraction in computation that can reconcile effects with pure functional computation. More importantly, there is no category "Hask" of Haskell programs, and there isn't even a candidate category that's even close.

One of Odersky's motives in creating Scala was bringing the power of Haskell into the JVM world, although this is not the only motive (tight integration of OO and FP being another).

Haskell's key innovation over its predecessors (Miranda and ML) was the addition of higher-kinded types, which in turn made ad-hoc polymorphism digestable in a typed world (in the form of type-classes). The combination of HKTs and type-classes enables the monadic abstractions that contemporary Haskell programming is widely known for today.

Note that Scala has HKTs and type classes (via implicits), so Haskell programs can typically be transliterated into idiomatic Scala without major complications.
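A rough sketch of that correspondence, using a made-up Show class (names are just illustrative):

```scala
// Haskell:  class Show a where show :: a -> String
trait Show[A] {
  def show(a: A): String
}

object Show {
  // Haskell:  instance Show Int where show = ...
  implicit val showInt: Show[Int] = new Show[Int] {
    def show(a: Int): String = a.toString
  }
}

// Haskell:  display :: Show a => a -> String
// The implicit parameter plays the role of the class constraint.
def display[A](a: A)(implicit s: Show[A]): String = s.show(a)
```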

[1] P. Hudak, J. Hughes, S. Peyton Jones, P. Wadler, A History of Haskell: Being Lazy With Class. http://haskell.cs.yale.edu/wp-content/uploads/2011/02/histor...

[2] Interview with Simon Peyton-Jones. http://www.cs.cmu.edu/~popl-interviews/peytonjones.html

[3] E. Moggi, Notions of Computation and Monads. https://core.ac.uk/download/pdf/21173011.pdf


> One of Odersky's motives in creating Scala was bringing the power of Haskell into the JVM world, although this is not the only motive (tight integration of OO and FP being another).

I'm not sure why you would think that. Odersky has long claimed ML as his primary functional programming influence, not Haskell. It does have HKT, so there's that...but that's about it. Typeclasses aren't really a part of the language. They were made possible by implicits. They have since had some language level support (context bounds), but that is more of a nice to have than a primary influence.

Just because Scala can "do" Haskell doesn't mean odersky was inspired by it any more than ML or Java.


From an abstract PL theory POV, Haskell's key innovation over ML was HKTs. It was an experiment at the time, but one that was successful beyond all expectation: I cannot imagine designing a new PL without HKTs. (Note that Rust is also trying to add HKTs, but has been running into difficulties with type-based lifetime tracking IIRC.)

Implicits are a generalisation of default arguments (I don't know where they were pioneered, maybe C++). Implicits were first tried in Haskell [1]. Scala refined them in several steps, the last being [2] which represents implicits on the type level. Mimicking type classes via implicits is a well-established Scala idiom [3].

[1] J. R. Lewis, J. Launchbury, E. Meijer, M. B. Shields, Implicit Parameters: Dynamic Scoping with Static Types. https://galois.com/wp-content/uploads/2014/08/pub_JL_Implici...

[2] M. Odersky, A. Biboudis, F. Liu, O. Blanvillain, Simplicitly: Foundations and Applications of Implicit Function Types. https://infoscience.epfl.ch/record/229878/files/simplicitly_...

[3] B. C. d. S. Oliveira, A. Moors, M. Odersky, Type Classes as Objects and Implicits. http://ropas.snu.ac.kr/~bruno/papers/TypeClasses.pdf


> Haskell's key innovation over ML was HKTs

In fact, ML modules have had higher-kinded types [1] since before Haskell even existed. I guess Haskell's main innovation in this domain is really its very convenient higher-kinded parametric polymorphism with type classes.

[1] MacQueen, D B (1984). Modules for Standard ML. Conference Record of the 1984 ACM Symposium on LISP and Functional Programming Languages. 198-207.


Thanks. That's really interesting. I need to read MacQueen's paper. Is this full HKT, in the sense that all constructions with HKTs can be encoded in MacQueen's system?


If by "all constructions with HKTs", you mean what can be done with HKTs in Haskell, then I'd say yes. It is well-known that ML modules provide a very advanced level of expressiveness, especially since OCaml's introduction of first-class modules. Also, Scala took this idea further and provides principled recursive first-class modules (which Scala calls dependent object types).

The problem is that modules in ML have a verbose syntax and are clunky to use compared to type classes. OCaml's modular implicits aim to make this better (see https://arxiv.org/abs/1512.01895), taking inspiration from Scala's implicits.


I understand that making modules first-class gives them a lot of expressivity. But they are a fairly recent development; at least in OCaml they come much after Haskell. But are modules in MacQueen's sense first-class?


> Implicits are a generalisation of default arguments.

There is an extreme semantic difference between an implicit parameter and a default argument. Have you spent any time at all using them? That's like claiming HKT is just a generalization of type constructors. Would you claim that HKT isn't anything new or novel because constructors have been around forever?


> Have you spent any time at all using them?

Sure I have used both.

Default arguments and implicits both have the same key idea: you can omit arguments to functions, and the compiler, guided by type information, synthesises the missing arguments during compilation. In order to understand the difference between both, it is crucial to realise that this compile-time synthesis of missing arguments has two related but different dimensions.

- Declaration that an argument is allowed to be omitted (and hence synthesised automatically).

- Declaration of the missing argument that is used in this synthesis.

Default arguments merge these two into one, e.g. with

    def f(x: Int)(b: Boolean = false) = ...
    f(2)
    f(2)
all calls f(2) become f(2)(false). The problem with this is that the default value to be used in synthesis cannot be context dependent.

Implicit arguments separate these two, enabling the programmer to make the default context dependent, e.g.

    def f(x: Int)(implicit b: Boolean) = ...
    locally {
      implicit val c = true
      f(2)
    }
    locally {
      implicit val c = false
      f(2)
    }
Now the first call f(2) is rewritten to f(2)(true), while the second becomes f(2)(false).

> Would you claim that HKT isn't anything new or novel because constructors have been around forever?

I'm not sure I see the connection: constructors are program constructs, while HKTs are "types for types".


> Default arguments and implicits both have the same key idea: you can omit arguments to functions, and the compiler, guided by type information, synthesises the missing arguments during compilation.

Saying implicits are a generalization of default arguments because they're both synthesized during compilation is like saying steam trains are a generalization of pipes because they're both made of metal. It elides too big a difference to be helpful, to the point that it's actually misleading.


I'm not saying implicits and default parameters are the same thing. I'm saying the key idea behind implicits is the realisation that default arguments merge two ideas that should be kept separate, if we want context dependent defaults. Some of implicits' other features can be (and have been) added to default arguments; see my other reply.


> I'm saying the key idea behind implicits is the realisation that default arguments merge two ideas that should be kept separate, if we want context dependent defaults.

This isn't true though - that's not what implicits are for and not how they're used. Indeed the clearest proof is that Scala still has default arguments, and they still have the behaviour given in your OCaml example. They're different things.


I did not say implicits and default arguments are the same thing. I said that implicits improve upon default arguments in various ways, the key realisation being the split between providing elided values in a context-dependent way, and permitting the elision of values.

I should not have said the key idea, but a key idea.

Default arguments are convenient if you don't need context dependence of elided arguments. I'd probably remove defaults in order to have a smaller and simpler language.


Maybe a disagreement here is fueled by the power level difference in these two features. Beyond the superficial injection examples, Scala implicits also support injecting values constructed dynamically and also recursive/derived implicits.


> injecting values constructed dynamically

You can also do this with default arguments, e.g. like so:

    let f x y =
      let g ?(a = x + 1) b = ...
      ...
For example the following is valid OCaml:

   let bump ?(step = 1) x =
     let waa ?( a = step+17 ) b = a * b in
     x + step + waa 666;;
Here the function waa has an optional argument whose default value depends on bump's optional argument.

> recursive/derived implicits

Recursion is quite restricted, for example this is not valid Scala:

    def f ( x : Int ) ( implicit y : Int ) : Int = {
      implicit val a = 17
      f ( x+1 ) }


Your example of nested functions with default arguments does not correspond to what derived implicits do. As a result of implicit resolution, an expression with an arbitrary number of subexpressions may be synthesized, the shape of which depends on the types involved. Default parameters simply cannot do that.

The example you give to justify "recursion is quite restricted" is not a restriction on recursion at all, it's an ambiguity problem. Define `a` as `y` to shadow the function parameter, and it compiles.
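Roughly, the shadowing fix looks like this (a base case is added here so the sketch terminates; the original unconditional recursion compiles the same way):

```scala
def f(x: Int)(implicit y: Int): Int = {
  implicit val y: Int = 17   // shadows the implicit parameter, so lookup is unambiguous
  if (x >= 0) x else f(x + 1)
}
```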


> does not correspond to what derived implicits do

I did not claim it did. The first example disproved ionforce's claim that default values cannot be constructed dynamically.

> Define `a` as `y` to shadow the function parameter,

Whoops, yes that's correct. I didn't realise this was possible.


> The first example disproved ionforce's claim that default values cannot be constructed dynamically.

But I don't think that is what they meant. I think they meant something along the lines of what I said above:

> an expression with an arbitrary number of subexpressions may be synthesized, the shape of which depends on the types involved


OK, maybe I misunderstood ionforce. I don't deny that type-driven synthesis is a core element of implicits.


Thanks for the Simplicity reference, hadn't seen that yet. Seems like it builds on the implicit calculus [1], which applies more generally to any lambda calculus than in the Scala context.

[1] http://homepages.inf.ed.ac.uk/wadler/papers/implicits/implic...


Would love to see a citation re: Odersky (or if he'd chime in here).

My general assumption is that if Scala was intended to be category theory, implemented, something along the lines of scalaz or cats would have been built into the language -- the philosophy of the language does not include a JS-esque philosophy of small stdlib, big library ecosystem.


See this talk https://youtu.be/P8jrvyxHodU

He mentions module systems fairly early on.


What does this have to do with the original assertion that "one of Odersky's motives in creating Scala was bringing the power of Haskell into the JVM world"?

AFAIK, Odersky doesn't particularly like people trying to replicate Haskell patterns in Scala. For example, he thinks using monads for most effects is inappropriate.


Threading fail. I thought I was replying to someone who had asked for a citation for:

> Odersky has long claimed ML as his primary functional programming influence, not Haskell.


Disclaimer : I'm nobody important, but I do have an opinion

If scala team were to disavow sbt, that'd be the single best thing they could possibly do for the ecosystem.

I used to write a lot of scala, and working with sbt was enough to eventually get under my skin.

I really like some of the OOP aspects of scala, the left-to-right style of thinking matches how my brain works. Scala having a lot to offer can distract new learners, especially with just how overwhelming all the different features can seem for someone not coming from both sides. I had ocaml and ruby under my belt when I found scala, so it was an easier curve, but lots of guys struggle with it. I don't mind showing them, and of course I learn a lot from guys with different experiences and different backgrounds, but that's sometimes a complaint I hear, also.

It's a language that doesn't compare very well. What is scala like? We don't know, there's only one language like it, it's a departure from many different paradigms.

I'm back to writing ocaml, all said and done. I enjoyed my time with scala, and interacting with my guys on the other side of the office still doing scala is always enjoyable. Smart guys! I'm not academic enough to grasp some of the new DOT stuff, but they are excited about it.

Anyways, I gave up hope a while back on scala getting rid of sbt. There are other build tools that exist, cbt, mill, maybe more, but without the blessing from lightbend or whatever they call themselves this week, it's not feasible, or responsible to deliver a solution to a client with an unofficial build setup. Of course, after five years of doing sbt I decided it's not ok to deliver sbt either.

Someday! Maybe?!


> If scala team were to disavow sbt, that'd be the single best thing they could possibly do for the ecosystem.

Seconded. Between http://www.lihaoyi.com/mill/ and https://bazel.build/, there are plenty of alternatives.


Thirded. There are some exciting things happening in the Scala build tool space right now. We're migrating everything to Bazel right now, including our Scala code. Bazel + Scala is not amazing yet, but so far I have enjoyed it much more than sbt. It's also very nice to use the same build tool across all our languages.


If you look at what they currently support in terms of plugin availability and stability, these are nowhere near being viable alternatives at the moment.

The only remotely viable alternative to SBT I've found is Pants. (I guess Maven and Gradle might qualify too, but it's been so long since I've used them with Scala that I can't be sure.)


Scala enthusiast Jon Pretty just announced `fury` as well!


Do you have a link to that? I can't seem to find it.



I've been using Gradle to build Scala projects for a while now without any issues.


The thing with Scala if you’ve come from OCaml is there’s no substitute for real Hindley-Milner, and the Scala object system isn’t good enough to compensate


I've moved to Scala in 2012 (because I needed to work on the JVM) from over a decade of Ocaml. I found that the absence of full type inference was not a major problem. The JVM library ecosystem on the other hand ... lack of libraries (at the time) really hampered Ocaml.


Does anyone use Scala in cases where OCaml is a viable option? My impression has always been that Scala is "better than OCaml if you need the JVM"


Mainly Spark, but it really depends on the domain, what libraries exactly that you'll need, hell I look at what kind of deployment routine the client wants, what their maintenance crew looks like compared to what they want from me.

There is plenty of scala in fintech, I can tell you that. But there's plenty of ocaml, f#, matlab, excel macros, r, kdb, cobol, fortran, c++, a vast array of different technologies.

Most places that choose scala are choosing it over erlang or Java or c#, they USUALLY only choose scala over ocaml if they want to have some old ocaml code talk with new infrastructure that they want built in scala, ime. Because of how performance works and that multi core isn't working yet, ocaml is limited in application if they're willing to find or already have scala programmers.


I don't like having to give up full type inference, but it's much better than having to give up HKT.


You can get close enough to HKT with OCaml that you don't really feel any pain there. We have our share of generic monad libraries.


Up to a point. I had a look and an implementation of traverse is already getting pretty cumbersome, and something like recursion-schemes seems beyond what anyone's even attempted.


I don't think SBT is in any way officially endorsed in the first place? Many popular libraries use it, but there are plenty of other options. If you're not maintaining a public library that has to cross-build against multiple versions of Scala, IME there's no reason not to just use Maven, which gives you simple and well-documented builds.


Not quite so simple to just use other options.

Scala.js, for example, mandates the use of SBT.



Maven works just fine.


Do you cross-build any libraries?


no


What problems did you have with sbt? Depending on how long a go you used it, those problems might have been fixed. In the last couple of years, a lot of simplifications have been made to how sbt build scripts are written.


SBT is the worst build tool I've seen in the 20 years of programming across a dozen languages.

And by a very, very long margin. It has arcane syntax, is slow, is incredibly complex if you want to extend functionality, it has multiple configuration files, the ridiculous Ivy global lock and inability to fetch Ivy artefacts in parallel etc. And that's just the start.


> SBT is the worst build tool I've seen in the 20 years of programming across a dozen languages.
>
> And by a very, very long margin. It has arcane syntax, is slow, is incredibly complex if you want to extend functionality

This seems a bit hyperbolic and a little vague. So, I can't really address it directly. Interestingly, I find sbt much easier to extend than most build tools because sbt tasks generally don't rely on side effects to communicate. Obviously YMMV.

> it has multiple configuration files

Doesn't every build tool have this? Or do you mean multiple config file formats? If that is the case, newer versions of sbt have moved to a single file format.

> the ridiculous Ivy global lock and inability to fetch Ivy artefacts in parallel etc

I can understand this concern. Ivy is kind of annoying. Fortunately, a new artifact resolver called Coursier[1] has been written and it solves this problem. You can use it now and it is currently in the process of being integrated as part of the default SBT install[2].

[1] https://github.com/coursier/coursier

[2] https://github.com/sbt/sbt/issues/2997


Macros going wrong with complicated builds, implicits galore all over the place, just the usual complaints.

The thing that makes sbt difficult in practice for me is twofold. I don't want to deliver hackedy shit to a client that I then have to defend if their quality control comes back complaining about it. Being able to use regular scala in sbt definitions is convenient but at the cost of making things expressable that should wisely be done another way.

Which brings me to the second point which is getting a sbt definition from a client and then they don't want to change it, or they don't have quality control so it's just awful, or its so bad that someone has to rewrite it for the solution. The latter being the easiest to deal with, but that also costs time.

For those looking, what things do you feel have most improved the sbt experience?


> Macros going wrong with complicated builds, implicits galore all over the place, just the usual complaints.

I'm not sure I totally understand this. What do you mean by "macros going wrong"? It doesn't seem like there are that many implicits used. They're mostly used to add methods to strings to make expressing things more concise. Do you think implicits shouldn't be used at all?

> The thing that makes sbt difficult in practice for me is twofold. I don't want to deliver hackedy shit to a client that I then have to defend if their quality control comes back complaining about it. Being able to use regular scala in sbt definitions is convenient but at the cost of making things expressable that should wisely be done another way.

> Which brings me to the second point which is getting a sbt definition from a client and then they don't want to change it, or they don't have quality control so it's just awful, or its so bad that someone has to rewrite it for the solution. The latter being the easiest to deal with, but that also costs time.

This seems contradictory. It seems like the first paragraph is at odds with the second. In the first paragraph, it sounds like you are saying that you want to be able to write messy builds and not have anyone complain. In the second paragraph, it sounds like you are saying you don't want to deal with other people's messy builds. Am I understanding correctly? If so, do you want messy builds or not? How is this related to the build tool? What build tool prevents people from writing messy builds? Are you thinking of something like maven that is very rigid?

> For those looking, what things do you feel have most improved the sbt experience?

Some things that may (since I don't know when you last looked) have improved:

* SBT 0.13 greatly improved the .sbt build file syntax (no more line breaks between settings, multi project in one file)

* SBT 0.13 also cleaned up the symbol soup. Now to define tasks/settings you only need to know 3 symbols that are fairly self explanatory ':=', '+=', and '++='. If you want to depend on another task/setting output you just have do `taskName.value` in your task/setting definition.

* A new artifact resolver has been written[1] that fixes all of the issues with ivy. It can be used today and is currently in the process of being integrated in the default SBT install.

* SBT 1 introduced a build server concept with language server protocol support that is starting to improve IDE/editor integration

* This isn't strictly SBT but IntelliJ now natively understands .sbt files so you can get autocompletion and code navigation.

[1] https://github.com/coursier/coursier
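For anyone who hasn't looked recently, here's a small illustrative build.sbt in the post-0.13 style (the dependency and version are just examples, not from any real project):

```scala
// build.sbt — the three operators cover most settings:
name := "example"                                   // assign a setting

// append a single item to a sequence setting:
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % Test

// append several items at once:
scalacOptions ++= Seq("-deprecation", "-feature")

// depending on another setting's output is just `.value`:
version := "0.1." + scalaVersion.value
```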


Please don't use "guys" this way. This is not PC nitpicking, It's important. "People" or "folks" may not have the ring you're after, but too bad.


I was in a meeting where this just happened. And as liberal as I am, it is exactly PC nitpicking. You can also choose to lead by example rather than correcting people. Despite all the good intentions in the world, people may not react well to this style, making your contribution locally ineffective.

This is double if the person you're correcting doesn't think what they're doing is "wrong".


Some of my biggest stressors in the past came from trying to change people or believing that I was entitled to have people act and react the way I wanted them too.

I don't give that privilege to others and I don't expect anybody else to.

I say guys because my developers and analysts and quality managers are all men. The only women I work with are two secretaries, and they're not even in my department.

What's too bad, friend, is that you have deluded yourself into believing you have the right of way over another person, for any reason whatsoever.

Bless your heart!


How is this not PC nitpicking. You really need a special kind of mindset to interpret completely harmless and generic term in this way.

Please take your SJW-ism somewhere else. This is HN.


Oxford dictionary:

1.1 guys People of either sex.


> Scala 3 code can use Scala 2 artifacts because the Scala 3 compiler understands the classfile format for sources compiled with Scala 2.12 and upwards.

I feel like this isn't getting much attention but it is a huge deal. A big part of the reason that Python 3 went the way it did was users didn't want to upgrade until their libraries did and libraries didn't want to upgrade until their users did. With Scala 3, users can upgrade pretty rapidly, freeing libraries to upgrade without fear of leaving their users behind.


Just curious, what companies (or kinds of companies) are betting on Scala nowadays? It seems that the people wanting "better Java" all decided they like Kotlin, and the functional programming people now gravitate more towards either F# (for .NET ecosystem) or OCaml/Reason (for the more unixy world). And the academic/research/learning crowd seems to like Haskell more.

Who's still in Scala boat? Are Google or Fb or other big cos known to give back to open-source growing any Scala codebases now?


I've been working for 3 years at a growing startup that has been hiring Scala engineers at least every half year since I started.

In the last ~18 months, the amount of CVs we're getting has been constantly increasing. Part of that is probably that our startup's hiring is maturing, but it's also the kind of CV that is changing.

I'd say that 3 years ago, there was an 80% chance that the applicant was highly self-motivated to learn Scala in their freetime, and tried/did introduce it at his/her current workplace. Today, there is an 80% chance that the applicant either "had to" learn it in their current workplace, or learned Scala when switching jobs. (Don't get me wrong, they're still motivated, and they took the chance when it was there!)

So there is a switch from the Early Adopters to the Early Majority (where the Early Majority now has worked 1 or 2 years with Scala at their current job, and is confident enough to look for a new one).

One driving force was definitely Spark, but there are a lot of Enterprise apps, unrelated to ML (usually with higher traffic requirements). The sort that would have most likely been done with Java or C# 5 years ago. It seems a lot of Enterprises introduce Scala when they try to break up their (Java) monolith into microservices.

So it seems that Scala has been carving out its place in backend/microservices with scalability requirements, and is eating part of Java's cake there.


> It seems that the people wanting "better Java" all decided they like Kotlin

I've been using Java and Scala exclusively for the past 10 years at two large NY banks and two fintech start-ups, and I literally do not know anyone who has ever compiled a Kotlin program.

Simply extrapolating from one's own experience is fraught with peril.


F# only has access to a subset of .NET deployment scenarios, where Scala can be used pretty much everywhere there is a JVM.

So most companies actually are more keen to bet on Scala than F#.

OCaml/Reason world lacks the wealth of Java libraries, which are an import away on Scala.

Regarding Kotlin, so far its killer use case is targeting Android, where one is stuck with an ageing Java subset. Outside of Android, it remains to be seen how it will keep up with the Java improvements coming up every 6 months now.


I don't think many companies are betting on Scala anymore. F# is just as irrelevant in the grand scheme of things.

Let's face it, Kotlin hurts Scala adoption.


Kotlin and Java 8. I started using Scala in the Java 7 era and loved it. When Java was stagnating, it seemed like an alternate language that targeted the JVM was the best option. Progress with Java has really picked up a lot since then. There is a lot that Scala has that Java doesn't, but now it's not enough to make it worth it. Java is so much easier to work with on a large team.


The two languages also have preferred libraries/frameworks, though. I code in Java again, and the main thing that bugs me is always being in the Spring ecosystem. There is a mindset in Javaland that every technology needs to be wrapped by Spring. Scala gave me the freedom to choose alternative libraries.


Oft. I agree with you there. I never enjoyed spring. I don't get the appeal. With scala I loved play framework. I know it has java support but I never tried it. It used so many scala idioms and features I'm not sure how well it would translate.


Depends where you work.

I know Java since the early days and only used Spring during one month as validation of an architecture proposal.


It seems like it. I recently took the liberty to update the language rankings on https://en.wikipedia.org/wiki/Scala_(programming_language)#L... and compared to 2016, Scala took a hit on pretty much every ranking.

Outlook largely negative, falling in rankings across the board.


Kotlin has many benefits that Scala has in terms of “being a better Java” without sacrificing the readability and usability benefits of Java itself.

Scala is a fantastic language, but it’s not one your average Java developer can pick up in a day or two.


What readability and usability benefits are you claiming Java to have?


Java is a simple language: the syntax is easy to parse and logic is generally straightforward to reason about - as a programmer you have very little ability to break expectations of how the language itself behaves (no operator overloading, etc).

Scala, on the other hand, gives you a huge toolbox, down to some features that are really complicated to reason about, like implicit parameters, the creation of completely arbitrary operators, etc.

[Insert some funny pun here about Java giving you a simple tool while Scala gives you an incredibly complicated one]. They're both great languages, but they serve very different purposes and audiences - Kotlin happens to fit Java's demographic better than Scala as a result, since it doesn't have the magic and complexity to the same degree Scala does (the most confusing new constructs probably revolve around builders/lambdas with receiver types, which aren't needed by most developers not writing DSLs).
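To make the contrast concrete, here is a minimal Scala sketch of the two features named above - an implicit parameter filled in by the compiler, and an arbitrary symbolic operator (all names here are hypothetical, for illustration only):

```scala
// An implicit parameter: the compiler supplies `cfg` from the
// implicit value in scope, so call sites can omit it entirely.
case class Config(prefix: String)

def greet(name: String)(implicit cfg: Config): String =
  s"${cfg.prefix}$name"

implicit val defaultConfig: Config = Config("Hello, ")

// Operators are just method names in Scala, so `+|+` is an
// ordinary method - completely arbitrary symbols are allowed.
case class Vec(x: Int, y: Int) {
  def +|+(other: Vec): Vec = Vec(x + other.x, y + other.y)
}

println(greet("world"))          // implicit Config filled in by compiler
println(Vec(1, 2) +|+ Vec(3, 4))
```

Powerful, but this is exactly the kind of thing a reader of unfamiliar code has to chase down: neither the source of `cfg` nor the meaning of `+|+` is visible at the call site.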


I am almost never surprised when I am reading Java code. It's almost never about the code.

With Scala I seem to have WTF moments about the language every once in a while. (e.g. the magnet pattern)
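For readers who haven't hit it: the magnet pattern mentioned above lets one method accept several unrelated argument types by implicitly converting each into a single "magnet" trait. A minimal sketch (all names hypothetical):

```scala
import scala.language.implicitConversions

// The "magnet": one trait, many implicit conversions into it.
trait PrintMagnet { def apply(): String }

object PrintMagnet {
  // Conversions live in the companion object, so they are found
  // automatically whenever a PrintMagnet is expected.
  implicit def fromInt(i: Int): PrintMagnet =
    new PrintMagnet { def apply(): String = s"int: $i" }
  implicit def fromPair(p: (String, Double)): PrintMagnet =
    new PrintMagnet { def apply(): String = s"pair: ${p._1} -> ${p._2}" }
}

// A single method that appears "overloaded" for every convertible type.
def describe(magnet: PrintMagnet): String = magnet()

println(describe(42))           // int: 42
println(describe(("pi", 3.14))) // pair: pi -> 3.14
```

The WTF factor is real: the call sites look like plain overloads, but the actual dispatch happens through implicit conversions the reader has to go hunt for.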


That assumes that Java is the only funnel for Scala. Sure, Kotlin likely now captures more of the people trying to get out of Java, but how many people does it capture with backgrounds in Haskell, OCaml, Lisp, R, Python/Pandas, or any other language for that matter?


Aren't there so few people with backgrounds like that (Python aside) that it doesn't really matter?


No? I came to Scala from R and Clojure. The point is that Kotlin has pigeonholed itself into being a language for people who can't stand Java. Scala is partially that, but it is also vying for the attention of anybody who does object oriented programming, anybody who does functional programming, anybody that cares about type safety, etc.

The idea that Kotlin is hurting Scala adoption rests on the assumption that the only people who would ever adopt Scala are the people who are trying to get rid of Java.


Not only on the JVM: Scala can be used in the browser with Scala.js, or without a JVM with Scala Native.


Yep - you're right... F# definitely can't be used in the browser: http://fable.io/


F# is now supported within the .NET core world so it opens up the deployment scenarios beyond anywhere traditional .NET would live.


Not everyone is getting into .NET Core.

In fact, Microsoft has to lobby library writers to actually care about .NET Core and .NET Standard.

https://channel9.msdn.com/Shows/Visual-Studio-Toolbox/NET-St...


How is this relevant to the discussion though? F# works on either .NET Core or the regular .NET Framework.


Eh, it mostly works on .Net Core. There are still some bugs in the compiler around Generative Type Providers when running the compiler on the .Net Core runtime, forcing you to compile with a build of fsc running on Mono or the full .Net Framework.

Still, for all intents and purposes you are correct - but there are still some growing pains in the tooling that are hard to ignore.


Meh, the reason many people weren't targeting .Net Core until recently was because of the huge pain in the ass it involved. .Net Standard 1.0/1.1 were missing a LOT of API surface area that many libraries needed. .Net Standard 2.0 fixes this, and makes life easier for library authors who already support multiple runtimes by providing a sane upgrade path from the hell that is PCLs. It's all about tooling; there's no reason for library authors to not target .Net Standard in their projects now unless they really need some Windows/Full-specific assembly that isn't included.


> Meh, the reason many people weren't targeting .Net Core until recently was because of the huge pain in the ass it involved

+1000. It wasn't even library support, it was the tooling that was a huge pain in the ass. Half the time it didn't work right, builds broke unexpectedly, build file formats kept changing, cripes what a pain. I swore off it until the early .NET 2.0 standard prereleases when things seemed more stable, and it's been much easier to port my libraries.


I've been using .NET Core since it was released. Sure in the beginning library support was spotty at best, but that was quite a while ago. I have yet to find a popular library that doesn't support .NET Core/Standard.


And Clojure? I find it to be the best experience on JVM.


My favorite Lisp variant.

I have not listed it, because in spite of spec, it still is a dynamic language.


> F# only has access to a subset of .NET deployment scenarios

And what subset would that be? The only place you can't use F# IIRC is UWP application which is likely not the deciding factor in choosing F# or Scala.


Anything GUI related, and their respective tooling support on Visual Studio, which it never got.

It is still playing catch up with VS 2017 tooling for VB.NET and C#.


About GUI on .NET Framework:

- Windows Forms works, but not perfectly. The editor works if you install the template for it, but there's no auto-generation of code (like double-clicking a button to generate a handler); you write that code programmatically. It's still used a lot, though. I use it in the REPL (fsi) to generate charts and custom data visualizations.

- WPF is the same: it works, but has no editor (codegen is less needed there, though)

- Xamarin support F# ( https://docs.microsoft.com/en-us/xamarin/cross-platform/plat... ) on Forms, etc

There's also good work from the community + MS, because the community has tried to adapt these technologies and make them friendlier to use from F#:

- Xamarin XAML in Elmish style, really more idiomatic ( https://github.com/fsprojects/Elmish.XamarinForms ), from the F# creator himself (Don Syme, who now works in the Xamarin division too)

- WPF and Xamarin XAML can be used with a type provider too, for statically typed views at compile time ( http://fsprojects.github.io/FsXaml/ )

The only one not yet supported is UWP, because of .NET Native.

All that without speaking of the GUI stacks outside the .NET Framework, like Electron + Fable, or just Fable + Elmish/React/React Native ( https://github.com/SAFE-Stack/SAFE-Nightwatch )


GUI development without designer support is just like time traveling back to implementing Turbo Vision and Clipper applications on MS-DOS.

Never understood the mentality for designing UIs by coding instead of visually.

I care for what comes in the box, and is directly supported by Visual Studio and Blend.

If someone needs to lose their .NET GUI tooling productivity to embrace F#, then better to wait while C# keeps getting F#'s most relevant features.

Even C++ has better UI tooling support on Visual Studio than F#.


It depends on what you need the GUI for. Do you need it for a complex LOB app? Then yes, an editor will be good.

Personally, I write C# and XAML, and I don't use the editor (VS or Blend); I edit the XAML directly.

About F# and GUI: it depends on the use case. For example, https://fslab.org/XPlot/ to show graphs.

  let series = [ "bars"; "bars"; "bars"; "lines" ]
  let inputs = [ Bolivia; Ecuador; Madagascar; Average ]
  
  inputs
  |> Chart.Combo
  |> Chart.WithOptions 
       (Options(title = "Coffee Production", series = 
          [| for typ in series -> Series(typ) |]))
  |> Chart.WithLabels 
       ["Bolivia"; "Ecuador"; "Madagascar"; "Average"]
  |> Chart.WithLegend true
  |> Chart.WithSize (600, 250)

This is used to programmatically generate a chart inside a window with some layout, which is usually enough. And I can do testing in the REPL.

But yes, if you use an editor like in the C# version, F# is less nice to use. Then again, working programmatically allows other things, like https://github.com/fsprojects/Elmish.XamarinForms

So it depends on how much time you spend editing the view (and why), vs. the gains in the logic behind the view. For me the global tradeoff is worth it, but it depends, so you are right.


The only time I have ever felt compelled to design a UI visually is when working with iOS or macOS because the framework is so centered around Interface Builder. When I write a WPF or JavaFX view I am not using Blend/Scene Builder to drag and drop controls, but to have a mostly-accurate preview of what crap looks like without having to build and run.


Ever heard of the web?


Yes, it keeps trying to reach parity with native UI design tooling both on desktop and mobile OS.


If you're waiting for feature parity with C# in VS, you'll never get it. There are millions of C# developers and F# devs are counted in the tens of thousands (sadly) so there's a reason for that.

I don't believe most organizations considering adopting F# or Scala are considering them for GUI development, so I'm not sure why you'd rule out F# because of that.


This is kinda true, but not in the sense that you mean it.

And maybe not across the board, but my clients are very comfortable with the (very few) F# solutions we've done. OCaml clients generally use it for stability, or correctness; they certainly aren't competing in the same sphere that Kotlin, Scala, and Java really operate in. Prominent in the financial space especially, but the trend is the same everywhere.

Reason may eventually help ocaml move closer into those spaces, as people are coming to ML with more openness and are more eager to learn and get involved, so that'll be cool to see in the future.

Anyways, don't count F# out either. I'm no expert, only ever used it with other project leads and with smaller clients, but MS is doing impressive work lately, like .NET Core and stuff like that. They at least can show that making .NET more visible and flexible is part of what they're after.

Java won't be killed. Ever. But people may stop writing it eventually.


> F# only has access to a subset of .NET deployment scenarios, where Scala can be used pretty much everywhere there is a JVM.

This statement is wrong. F# can use .NET Core, and it works much the same way as the JVM does.


Scala is used pretty heavily in the big data world, particularly if you are working with Spark.


Not to mention, Flink, Akka, Apache Beam (via the Scio library or the Java api).


Kafka too


Kafka switched to java 100%, afaik.


Python is still king there; even Databricks markets it that way. Scala is for advanced stuff that matters.


Python is king in analysis, but for big data engineering, most of the building blocks (as mentioned elsewhere in this thread, Akka, Kafka, Flink, Spark, etc) are written in Scala


It doesn't matter. Because pyspark is still the go-to language.

In fact, with Spark 2.3 Python UDFs, the performance gap has also narrowed. https://mindfulmachines.io/blog/2018/4/3/spark-rdds-and-data...


PySpark might be the go-to language for data scientists playing with the Spark REPL, or MLlib, but for production data engineering, Scala is still king.

Besides performance, and the obvious fact that not knowing Scala makes it difficult to understand the underlying Spark code, there are multiple ways in which Scala is more natural to develop in (many libraries are Scala-only, for example).


I don't think so. Python and data frames are arguably more natural to think about and reason with than Scala.

I have no doubt that Scala is more performant, and the "fat" jar mechanism makes dependency management and code shipping very easy (it's still tricky to install Python dependencies on your Spark nodes), but the pandas ecosystem is definitely more intuitive to understand.


I have the impression you are leaning towards thinking of data analytics (pandas, data frames, etc) whereas I and some other commenters may be thinking more of more data pipelining kind of architectures, where you can't afford wrong typing, scale is quite large and you are not even doing the kind of operations pandas dataframes are useful for


It depends on the use case. Our work primarily revolves around extending Spark with custom pipelines, models, ensembles, etc. to be deployed into our production systems (petabyte scale). Scala was really the only way to go for us.


I can understand performance difference, but I have not generally seen a difference in building custom pipelines and ensembles .. although I grant I'm not at your scale yet.

What kind of specific pipelines did you have trouble in pyspark ?


Although we decided to start using Scala specifically because PySpark was not as performant (2.0 was not so long ago), a reasonable use case I always keep in mind is aggregation (and in general any API which is still not solid/experimental/under work). Python bindings are always the last to be available (because all the groundwork is being done in Scala). We have a relatively large-scale process that takes advantage of custom-built aggregation methods on top of grouped Datasets, where we can pack a good deal of logic in the merge and reduce steps of aggregation. We could replicate this in Python using reducers, but aggregating makes more sense semantically, which makes the code easier to understand. Also, the testing facilities for Spark code under Scala are a bit more advanced than under Python (they are not super-great, but are better), even without considering that being strongly typed makes a whole class of errors impossible, right out of the compiler.

I very, very rarely think of using PySpark (and I have way more experience with Python than with Scala) when working with Spark. In a kitchen setting, it would be like having to prepare a cake and having to choose between a fork and a whisk. I can get it done with the fork, but I'll do a better and faster job with the whisk.
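For readers unfamiliar with the merge/reduce split mentioned above, here is a plain-Scala sketch of the zero/reduce/merge shape that Spark's typed aggregators follow (the names here are hypothetical, not Spark's actual API; this runs without Spark):

```scala
// An aggregation buffer with the three pieces of logic Spark-style
// aggregators need: fold in one row, combine two partial buffers,
// and produce the final result.
case class Stats(count: Long, sum: Double) {
  def reduce(x: Double): Stats =                 // add one row to this buffer
    Stats(count + 1, sum + x)
  def merge(other: Stats): Stats =               // combine two partition buffers
    Stats(count + other.count, sum + other.sum)
  def finish: Double =                           // final value (here, the mean)
    if (count == 0) 0.0 else sum / count
}

val zero = Stats(0L, 0.0)

// Simulate two partitions being reduced independently, then merged -
// which is exactly what happens across executors in a real cluster.
val partA = List(1.0, 2.0, 3.0).foldLeft(zero)((acc, x) => acc.reduce(x))
val partB = List(4.0, 5.0).foldLeft(zero)((acc, x) => acc.reduce(x))
println(partA.merge(partB).finish) // mean of 1.0..5.0
```

Packing logic into `reduce` and `merge` like this is what the comment means by aggregation being semantically clearer than hand-rolled reducers.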


I will stay away from veering into a statically typed vs dynamically typed conversation here ;)

But I'm very excited about PySpark 2.3 UDFs bringing grouped map. It will be interesting to hear your views on that: https://databricks.com/blog/2017/10/30/introducing-vectorize...


Only checked the implementation of the "Arrow UDFs" recently, because I'm interested in the Arrow interaction (for curiosity), so still don't have a strong opinion. My main concern is that a lot of the PySpark systems are playing around how to interact and speed up the systems while still staying on top of the Scala base.

I'd recommend Dask (haven't tried it much but from all I've seen is top-notch) to anyone who wants Python all the way down (at least until you hit the C at the bottom) ;)


Well, we run a hundred-machine cluster on Dataproc for doing our stuff. Dask is still not battle-tested, cloud-ready (or available), and is generally harder to work with than PySpark.

In general, I will stay happily in the spark world using pyspark rather than go to Dask right now.


Being able to pass data through Arrow is a big improvement, but there's also a lot of serialisation overhead you pay in Python. Also, if you want to do anything in the fancy areas (like writing your own optimisation rule for the SparkSQL optimiser), it's Scala. Even something as simple as writing a custom aggregator is impossible in Python (at least it was in 2.2; haven't checked 2.3 or "current" 2.4)


Scala is still primarily used for data engineering workloads due to the fact it is a JVM language. (There's Java too, but no one wants to write Java code)

PySpark is often used for data science experimentation, but is not as frequently found in production pipelines due to the serialization/deserialization overhead between Python and the JVM. In recent years this problem is less pronounced due to the introduction of Spark dataframes, which largely eliminates the performance differences between PySpark and Scala Spark, but for UDFs, Scala Spark is still faster.

A newer development that may change all this is the introduction (in Spark 2.3) of Apache Arrow, an in-memory column store engine which lets Python UDFs work with the in-memory object without serializing/deserializing. This is very exciting, as it lets Python get closer to the performance of JVM languages.

I've played around with it on Spark 2.3 -- the Arrow interface works but still not quite production-ready but I expect it will only get better.

Many folks are making strategic bets on Arrow technology due to the AI/GPU craze (and an in-memory standard enables multiple parties to build GPU-based analytics [1]), so there is tremendous momentum there.

At some point I expect the relative importance of Scala on Spark will decrease with respect to Python. (even though Spark APIs are Scala native)

[1] https://www.nextplatform.com/2017/05/09/goai-keeping-databas...


Scala has an insanely productive and efficient streaming ecosystem with the likes of Kafka and Akka streams. You can use those with other languages but it's not nearly as nice.


Why isn't it nearly as nice? Is there something particular about Scala such that Kafka or Akka are best implemented in it? Could you give some concrete examples (e.g. compare it with Java, C, C++, Rust, Haskell)?


(Compared to Java, C or C++ there's all the usual ML-family goodness - first-class functions, algebraic data types with pattern matching, type inference).

Scala has a for/yield construct similar to Haskell's "do notation", which is the perfect way to work with async code - it strikes the right balance of avoiding an unreadable callback pyramid of doom, but still your async yield points visible (the difference between <- and = is about as concise as it gets, but still very visible). And having HKT and allowing you to express concepts like monads means that a whole library ecosystem can build up of standard operatons operations (e.g. traverse, cataM) that work on async futures and also on other contextual/effecty types (e.g. audit logging, database transactions). That's the big advantage it has over Rust and most other competitors, and it particularly shows in things like Kafka/Akka that need to work with async a lot, but also in large programming projects generally. (What it shows up as in practice is that you can do everything in "plain old code" - all the things that need reflection/interceptors/macros/metaclasses/agents/... in other languages are just libraries in Scala - and that makes development so much easier and bugs so much rarer)

Haskell has all that - the trouble with Haskell is that "there's no way to get to there from here". I was able to go from writing Java on Friday to writing Scala on Monday - not great Scala, but working Scala, and I was just as productive as I was in Java. I couldn't've done that with Haskell.


I think the parent comment was talking more about using those libraries with Scala, not necessarily implementing them.

Can't speak too much about the others, but I recently looked into using Akka with Java. Akka itself had a good deal of documentation for Java users. However, other related products (like tools for monitoring Akka, etc.) were clearly treating Java as a second-class citizen. When there was documentation, it would be outdated and unmaintained. I ran into this over and over again, which weighed heavily on my decision to not use Akka. I'm not quite sure if there is anything about the language that would have made it particularly challenging (doubt it), but it definitely seems like the community built around it is more Scala-centric. And that in itself makes it challenging to use with anything else.


I took the comment to mean that Scala is the best choice for having implemented, e.g. Kafka, and that other languages somehow aren't suited for it, and asked the question with that in mind.


I still think Scala is a better better-Java than Kotlin, but Kotlin captures that market better because of the much lower learning overhead.

I'm much more of an ML-style programmer, but the thing keeping me away from OCaml is the lack of a multicore runtime. Not only is Scala's concurrency mature and performant, implicits (like having an ExecutionContext) make it extremely easy to use. Even if OCaml released its multicore runtime today, it would take years to get to the maturity of the Scala ecosystem.

Plus, occasionally I run into areas where OOP really blows away functional programming. It's nice to not have to learn a new language for those occasional times.


OCaml fully supports OOP (the 'O' in OCaml).


> Just curious, what companies (or kinds of companies) are betting on Scala nowadays?

LinkedIn, Twitter, The Guardian, Morgan Stanley, Barclays, Zalando. Generally speaking, Scala is used a lot by companies involved with Big Data (because they use Spark).


I’m not sure if your info is dated or mine is, but all the systems development at LNKD that was being done in Scala is now done in Java. There is some legacy Scala still but as of a couple years ago they decided no new development. Perhaps they are still using Scala in a data science context, since it is hard to avoid there, these days.


It's trivial to avoid it in data science, with even modest to large (although not "Big") data in the few TB range. My data science team's entire stack is basically Python, with a smidge of Java and Rust for infrastructure (soa, pipelines) development.


I almost added the clarifying statement "on the JVM" but got lazy. Yes, Python is the obvious choice for data science until you hit a certain scale.

It's not even clear to me that most companies doing data science in Scala have that scale -- they're just using tools and libraries companies at that scale have open sourced. You could call it cargo culting, but I think it's more nuanced than that. I think engineers can be separated into two camps roughly: those who are passionate about the language(s) they use, and those who are simply trying to get the result they need and don't care what language they use to get it. A lot of data science engineers naturally fall into the second bucket, so using Scala because a library they want to use is written in it comes naturally, even if they could get the job done in Python (possibly with a bit more wheel reinvention).


I'm in the second camp, actually.

Python is a good choice for data science even at relatively large scale. I'd question its suitability for stable, scalable deployment in production (not to say it can't be used there, just that I wouldn't necessarily reach for it first, preferring either C++ or Rust for that).

Scala just doesn't figure into the picture at all. I consider that some "Big" data tools were written in it to be a matter of trivia and not essential to the work of data science.


I think your productivity in Scala would be quite a bit higher than in Rust. I've done reasonable amounts of Rust and quite a lot of Scala, and _given the current state_ Rust is simply slower to develop in. C++, well, you know what the downsides there are if you prefer it over Python.

I think for a particular data science mindset (the category theory toting, bijection loving person) Scala actually _is_ essential to the work of data science. But these people are in a minority.

Anyway, if you're truly in the second category, then the fact that the best library for doing X is in Scala would mean you're going to write some Scala, despite the fact that it's ~accidental that it was written in that vs Python.


The control plane for Cisco Hyperflex[1]'s distributed storage uses Scala. We're hiring[2].

1- https://www.cisco.com/c/en/us/products/hyperconverged-infras...

2- https://jobs.cisco.com/jobs/SearchJobs/hyperflex?3_12_3=187


This doesn't mean they're using Scala to develop, necessarily. Some architects at the company I work for decided we needed Kafka, but there isn't a single line of Scala being written by anyone here.



Twitter created a website, "Scala School", a while ago, so I'm guessing they didn't jump ship yet (but could be wrong). https://twitter.github.io/scala_school/


Twitter is still very much heavily committed to Scala.


Interesting that you see it that way. I see Haskell and PureScript usage growing, and I don't know many people interested in OCaml/Reason. Not trying to say you're wrong - just commenting on the effect of our respective filter bubbles.

As for Scala usage. I'm a Scala consultant, so I'm biased, but I'm seeing a lot of adoption. Most finance companies and most media companies are using Scala.


We use Scala as the primary backend language at Elastic for the Elastic Cloud SAAS & enterprise products (along with Java, Python and Go).


Yeah, I don't think Scala 3 changes anything with regards to perception and/or adoption of Scala. That ship has pretty much sailed.

Martin Odersky seems like a really good language designer. I took a look at the Scala 3 languages features, and a more radical departure from historic Scala probably would have helped Scala.


Three or four years ago the biggest criticism of Scala was that releases were breaking backwards compatibility too much. On this very thread you'll find people worrying that Scala 3 will create a Python 2/3-style endless awkward transition. At this stage it's a mature language with a big established ecosystem, it can't afford to break too radically.

(Did you have specific breaking changes in mind? The most important changes I'd want to make to the language - having a syntax for guaranteed-safe equality comparison and guaranteed-safe pattern matching - could be done in a backwards compatible way, and the only other thing I can think that I'd like would be Idris-style totality checking which could also be backwards compatible.)


I wouldn't have called it Scala 3. And if I had the big brain of Martin Odersky, 5 years ago I would have started thinking about an intermediate step in programming productivity and correctness, instead of doing Dotty. I don't follow Scala that much anymore, but it seems that it's mostly a reworking of the internal consistency of Scala from an implementation perspective. At least that was the gist of it I got from watching Odersky give a talk about it a couple years ago.


> 5 years ago I would've had started thinking about an intermediate step in programming productivity, correctness - instead of doing Doty.

Wait what? In the last post you were complaining it was too close to existing scala, now you're saying it's too big a step? Again, what is the change you're actually advocating?


You misunderstand. The intermediate step in programming productivity has nothing to do with any particular language. I would not have done "Scala 3", but something else that implements an "intermediate step up in programming productivity and correctness". Probably something a bit radical, but hey, development is so stagnant when it comes to the way we program that it would've been worth the risk.

Hint, the stuff that Chris Granger was trying to accomplish.


I find Scala is the thing that's actually advancing programming productivity - even when it comes specifically to IDEs/HCI/visual programming - whereas I expected Granger's effort to fail as it did. So I'm very glad Odersky's continuing to focus on Scala; if anything I was worried that his efforts on Dotty were detracting from maintenance and improvement of Scala proper.


Could you (and perhaps others) say more about this? I was excited to read that key goals were simplification, clarification, and consolidation. But my exact worry is that they didn't go far enough.

I liked the idea of Scala and took Odersky's course. But I built a few things in it and it was never not frustrating. What Bruce Eckel said resonates with me: "I’ve come to view Scala as a landscape of cliffs – you can start feeling pretty comfortable with the language and think that you have a reasonable grasp of it, then suddenly fall off a cliff that makes you realize that no, you still don’t get it."

Then I watched this Paul Phillips talk, which convinced me that the problems were deep, not superficial: https://www.youtube.com/watch?v=4jh94gowim0

My hope with Dotty, etc, was that they had learned a little humility and were pruning back enough to make it a decent developer experience. But I could well believe that it was impossible to do enough and still end up with something that could be fairly called Scala.


> a more radical departure from historic Scala probably would have helped Scala.

Maybe. But high-risk. See Perl 6.


I know quite a few finance companies use it - Morgan Stanley and Goldman Sachs come to mind.



N.B. Scala and its ecosystem is actually a lot more oriented towards FP than F# or OCaml for that matter.

Its popularity also far surpasses F# and OCaml in jobs available, books, libraries or other metrics that count.


We use Scala heavily. If Kafka/Spark/Monoids/Semigroups/Cats/Algebird makes any sense to you and you are looking for a job, send your resume to mansur.ashraf@apple.com


Here's one datapoint: AutoScout24 (the vehicle vertical of Scout24, a Europe-based digital marketplace) fully committed to Scala a couple of years ago as part of a re-platforming (microservices on AWS). It's still the recommended default language.

Not a "big" company by global standards, but not small either, with around 120 engineers.


Twitter now is running its Scala code on GraalVM.


Woah, can you provide a reference? I'm curious about the rationale and performance implications.



Did you find any benchmarks with hard numbers regarding performance improvement?


> functional programming people now gravitate more towards either F# (for .NET ecosystem) or OCaml/Reason (for the more unixy world). And the academic/research/learning crowd seems to like Haskell more

What if you want functional programming without having to go to the MS stack, and want to stay on the JVM? Then you only have two real options: Clojure and Scala.


If you make heavy use of Spark there is still no better language/ecosystem than Scala.


F# and OCaml? What source do you use to say that? Scala is way more popular in the industry than these two combined.


Expedia is a big-time Scala shop, but new stuff is trending towards Kotlin.


How have people fared with Scala integration with Java libraries? When Scala first came out, I liked the idea: if I wanted to use X library, great, I could. As I pondered it more, this seemed terrible. Say I embrace the functional side of Scala. I know what I'm doing. Then I hire a Jr Dev. She comes from the world of Java/Python, where functional is not as big of a thing (keeping in mind that while Python does support functional style to a degree, many use it as a procedural/OO language in school). Suddenly I have to worry about the Jr Dev doing silly things like introducing mutable Java objects into functional code. Side effects could start to show at odd times, which makes debugging terrible in a production environment.

All that said, how have others dealt with this issue? Did you just not work directly with Java libraries? Did you use the OO side as a Better Java?


There are many answers, often related to company size and interest in teaching. Some large bay area companies think that training people into a FP style of Scala is too expensive/hard to hire for, and end up using it as a nicer Java. Smaller companies that do not hire 100 people to work in Scala every year just bite the bullet, expect FP, and end up wrapping a large majority of Java libraries with FP abstractions, some thin, some quite big.

I prefer an intermediate solution when I can get away with it: Localize the mutable Java objects as much as possible, and just make sure that I don't leave a team/teammate that has little Scala experience all alone for a while. This often leads to styles that might be frowned upon by both camps of programming, but in my experience, dropping to imperative code when FP solutions are harder to optimize, while making sure that mutability is well contained and doesn't cross interfaces is Scala's happy place. Depending on the work to be done, each codebase can be pretty FP heavy, or be mostly imperative with an FP facade.

The real trick with Scala is really library design though: it's very easy to make a new library that has dozens of new concepts and is hard to learn and use, all while exposing things like Shapeless HLists to the outside world. Libraries that are easier to consume and don't crush compilation times through type magic are often tougher on the author, leading to more code generation and macros. Most library authors know so much Scala that they don't realize that just using their library well incurs quite a mental cost for new developers.
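For illustration, the "well-contained mutability" style described above might look something like this sketch (names hypothetical): an imperative implementation that uses a mutable Java map internally, while only an immutable Scala Map ever crosses the interface.

```scala
import scala.collection.JavaConverters._

// Hypothetical example: imperative code inside, immutable result outside.
// Callers can't tell a mutable java.util.HashMap was ever involved.
def wordCounts(words: Seq[String]): Map[String, Int] = {
  val counts = new java.util.HashMap[String, Int]() // local, never escapes
  for (w <- words) {
    counts.put(w, counts.getOrDefault(w, 0) + 1)
  }
  // Copy into an immutable Scala Map at the boundary.
  counts.asScala.toMap
}
```

The mutability is invisible from the outside, which is exactly the point: frowned upon by purists, but pragmatic.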


I find that once you're working in immutable/pure style it's worth having an explicit boundary between Scala and Java, a class that exists only to wrap the Java thing you're using, and really it's pretty easy to do so - Java-style code sticks out immediately in code review (though you can implement rules to flag it if you really need to).

I've done all of these approaches in various projects - using Scala just as a better Java, with Java-style code, using Java libraries, then writing code that was in a more immutable style that used wrappers around Java libraries, then writing very pure (i.e. no unmanaged effects) code that either used scala libraries, or used Java libraries only in an interpreter for a monad. All of them work. Mixing mutable objects and pure functional style is never going to go well, and that's as true in Python-only projects (say) as it is in Scala/Java hybrid projects; the language interop doesn't leave you any worse off than you were without it, if that makes sense.
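A minimal sketch of such a boundary class, assuming `java.util.Properties` as the mutable Java thing being wrapped (the `Config` wrapper is hypothetical): it copies defensively on the way in and returns a new instance instead of mutating.

```scala
import java.util.Properties

// Hypothetical boundary class: an immutable Scala facade over the
// mutable java.util.Properties.
final class Config private (private val props: Properties) {
  def get(key: String): Option[String] = Option(props.getProperty(key))

  // Returns a new Config rather than mutating in place.
  def updated(key: String, value: String): Config = {
    val copy = new Properties()
    copy.putAll(props)
    copy.setProperty(key, value)
    new Config(copy)
  }
}

object Config {
  // Defensive copy at the boundary: later mutation of the caller's
  // Properties can't change this Config underneath us.
  def fromJava(props: Properties): Config = {
    val copy = new Properties()
    copy.putAll(props)
    new Config(copy)
  }
}
```

Because construction goes through `fromJava`, Java-style mutation of the original object sticks out immediately in review and can't leak past the wrapper.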


> How have people faired with Scala integration with Java libraries?

It's useful because the JVM ecosystem is much larger than the Scala ecosystem. There's almost always a library for your purpose. If it isn't as idiomatic Scala as you'd want, you can often easily create a small wrapper.

> Suddenly I have to worry about the Jr Dev doing silly things like introducing mutable Java objects into functional code.

Code reviews and education.


Code reviews. Let developers peer-review each other's code changes and share knowledge about the functional programming practices used in the codebase. This is not a technical problem and has nothing to do with Scala.


Language design can discourage those bad practices and lessen the workload of code reviews.

Catching bad practices and anti-patterns in code review sounds fine on paper, but I'm sure we've all been in the situation where there simply isn't enough time in the day to do intensive code reviews while still maintaining the pace needed for delivery.


I’ve programmed in Scala most days for the past 3 years, we use plenty of Java libraries, and it works well. Generally it’s as simple as a thin wrapper around the library that converts mutable data classes to immutable versions (or vice versa) and wraps blocking methods in futures. Sometimes it’s more complex than that, but when I’m reaching for a Java lib, it’s normally just for a few methods that are easily wrapped.
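A sketch of the future-wrapping part of that pattern, with a made-up blocking `LegacyClient` standing in for a real Java library:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Stand-in for a blocking Java library class (hypothetical).
class LegacyClient {
  def fetchUser(id: Long): String = { Thread.sleep(10); s"user-$id" }
}

// Thin Scala wrapper: the blocking call is shifted onto an
// ExecutionContext, so callers only ever see a non-blocking Future.
class UserService(client: LegacyClient)(implicit ec: ExecutionContext) {
  def user(id: Long): Future[String] = Future(client.fetchUser(id))
}
```

In practice you'd usually give blocking calls a dedicated thread pool (or at least mark them with `scala.concurrent.blocking`) rather than sharing the global execution context.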


Most Java libraries that would cause issues due to blocking or side effects are covered by the big Scala frameworks like Play/Akka or the large healthy ecosystem of alternatives such as those from Typelevel, 47Degs and many others. If I do need to pull in a Java library, it's usually for something minor and without side effects. On occasion I've had to write small wrappers around blocking or mutable Java APIs, but it's no big deal. With regard to junior devs, they generally write code based on established patterns, and we review everything in detail.


When interacting with a Java library, I use scala.collection.JavaConversions to convert any Java structures to the equivalent Scala collection. Once I'm done operating on the collection (map, filter, etc.), I convert it back to Java.


You should be using scala.collection.JavaConverters instead, as it makes the conversion explicit (by adding a ".asScala"/".asJava" method to collections).

Implicit conversions (i.e. where a type is silently converted to another type) are dangerous and can lead to very surprising problems.
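For anyone following along, the explicit style looks like this (`JavaConverters` was current at the time of this thread; Scala 2.13 later moved it to `scala.jdk.CollectionConverters`):

```scala
import scala.collection.JavaConverters._

val javaList: java.util.List[Int] = new java.util.ArrayList[Int]()
javaList.add(1)
javaList.add(2)

// Explicit, reviewable crossing into the Scala world...
val doubled: List[Int] = javaList.asScala.toList.map(_ * 2)

// ...and explicitly back out when a Java API needs it.
val backToJava: java.util.List[Int] = doubled.asJava
```

Every boundary crossing is visible at the call site, which is exactly what you want when reviewing code that mixes the two collection libraries.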


Whoops, my bad about that. I was on mobile and forgot which of the two provides `.asScala`...

