Scala projects are difficult to maintain (mungingdata.com)
166 points by MrPowers on March 22, 2021 | 231 comments



I had a startup that went all in on Scala. By the time we realized we chose the wrong language it was too late.

Complexity is the primary issue with the Scala language... when the whole goal is to have a "scalable language", which in itself is diametrically opposed to simplicity, your language is dead on arrival.

After using Scala, Go seemed like a dream come true.

We really loved the integration of OO and FP, really miss the FP awesomeness honestly... but simplicity trumps doing either of those things well.


There is a tendency I observed in engineers to not want to appear dumb. This is a recipe for disaster when coupled with a deep language like Scala, because nobody wants to solve problems the simple way, and Scala gives a tremendous amount of rope. That's why I think it's only a suitable choice for very experienced or disciplined teams, with a strong consensus on style or any DSLs. I can see average enterprise shops lacking these things and ending up with suboptimal results if Scala is used.


> to not want to appear dumb

A few production incidents are enough to change this mentality. I'd rather write more verbose but "obvious" code than "clever one-liners". I feel sorry for anyone who has to understand someone's DSL on the fly when there's an outage and your business is burning through $X per minute of downtime.

(On the other hand, you probably want to structure your production changes in a way that makes them easy to roll back without understanding everything, but that's another conversation. Either way, the thought that someone may want to chase me down after work because my code broke and they can't understand it is enough for me to stop being clever).


I think Scala needs to be treated something like how smart shops treat C++: there's so much there you can hang yourself with, so you need to define a subset/dialect and create a style guide and stick with it.

This is how C++ has been successful at Google, and it's how I'd approach something like Scala if I were to go back to doing it now.


I don't disagree, but Java keeps adding features that bring it closer to Scala, and Kotlin is well-supported and has cleaner syntax, so you have better alternatives than C++ programmers do. The space between C and C++ is too fragmented, and too many of the languages there are niche.


Yes, I haven't been in the Java world for almost 10 years now, so I can't comment too much on that, but it seems like Java improvements & Kotlin have vacuumed up the reasoning for using something like Scala. When I used it 12 years ago, the core Java language itself was terribly anemic.


Java stagnated for a long time at 7. Java 8 really started the change and it has been moving forward at a good pace since then.

I'm going to say maybe too fast. Not that the changes aren't good (they are good), but rather that the pace of features from 8 to 11 and then 11 to 17 has been fast, and it's a struggle just getting old toolchains updated to 8 (yes, it pains me) - much less using the features that are present in 11.

In the organization that I work in, the "ok, we've got old things on these computers, update to the latest Java now?" conversation was postponed until September so that we can say "Java 17 is the standard" and have that be our long-term platform for a while.

But there are a lot of people who are still writing code that would compile in 7.

Looking at the version history, 12 years ago would have been Java 6... there were a lot of good changes with collections in Java 7. https://www.oracle.com/java/technologies/javase/jdk7-relnote...

Try-with-resources and multi-catch were the two big things that made 7 much better than 6 for me. And with 8 - https://www.oracle.com/java/technologies/javase/8-whats-new.... - while:

* Classes in the new java.util.stream package provide a Stream API to support functional-style operations on streams of elements.

is only a single bullet point, that's a big thing.


That's how it is successful. HN said my post was too long, but here's my rebuttal to the OP, and it pretty much goes along with what you said: https://gist.github.com/jackcviers/a7e74a3ad0a57f6ab97afd25c...

If you are in the Databricks Spark ecosystem -- stick to using the Databricks style and Spark libraries. Don't bring in anything that isn't in that ecosystem, because it runs way behind standard Scala -- it's on a six-year-old version at this point.

If you don't want Java to eat the features you use, use either the Typelevel ecosystem or the ZIO ecosystem of libraries, and use their respective preferred built-in DI methods (tagless final and ZIO environments, respectively).
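For anyone who hasn't seen it, here's roughly what the tagless final style looks like - a minimal sketch with made-up names, assuming a cats dependency for the Monad type class:

    import cats.Monad
    import cats.syntax.all._

    // The "dependency" is an algebra parameterized over an abstract effect F[_].
    trait UserRepo[F[_]] {
      def find(id: Long): F[Option[String]]
    }

    // Business logic only asks for the capability it needs (here, Monad);
    // the concrete effect (IO, Future, ...) is chosen at the edge of the app.
    def greet[F[_]: Monad](repo: UserRepo[F])(id: Long): F[String] =
      repo.find(id).map {
        case Some(name) => s"hello, $name"
        case None       => "hello, stranger"
      }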

If you don't mind eventually having java eat your features, go ahead and use the Lightbend ecosystem and buy the support license. If you don't want to buy support, the twitter/finagle ecosystem is basically the same without a support possibility. But within those there's definitely room for creative license.

All in all, the functional and typelevel subset works best, as it's a tiny footprint of features and libraries, it doesn't leave any gray area for personal expression, and it treats the language as its own language rather than a clone-your-own C++/Java/JavaScript/Ruby.

Would I rather everything move to Mill? Yes. Is that ever going to happen, nope.


Some of the Twitter ecosystem works fine with the Typelevel style. Finch, in particular, comes to mind.


An evergreen conversation:

"I love C++." —Alice

"Oh? Which C++?" —Bob

Every successful C++ team defines its own C++ subset.


This is why I can't figure out for the life of me how to learn C++ effectively for professional use.


In reality it's not really like that. Yes different shops are going to be using different C++ features in different ways but they usually share some common "core" of the language.


One approach is to get a sample of code from the domain you want to inhabit, preferably including some from the very people you want to work with, and learn the subset of the language they use.


Also, Stroustrup thinks that is the way to use C++; his talks are all about 'do not use all of C++, pick your features'. Makes sense in a way, but it does lead to various notions of what C++ is.


That is exactly it. A company-wide style and coding guide is a must with Scala.


A company-wide style doesn't quite get it done. Is your style not to allow higher-kinded FP coding (because it is too hard to understand)? If so, does that mean you don't allow libraries written in that style, like Slick?

We had PhDs and very senior engineers, and they thought debugging common libraries like Slick would take them weeks of work, so they opted to avoid that entire type of Scala. But that is hard to do, because it's a lot more than a coding style.
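(For readers wondering what "higher-kinded" even means here: it's code that abstracts over the type constructor itself. A tiny, made-up illustration:)

    // F[_] is a type constructor (List, Option, Future, ...), not a concrete type.
    // Libraries like Slick and cats build many of their APIs at this level.
    trait Wrapper[F[_]] {
      def wrap[A](a: A): F[A]
    }

    val listWrapper: Wrapper[List] = new Wrapper[List] {
      def wrap[A](a: A): List[A] = List(a)
    }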


I do not think you do much higher-kinded FP coding in Slick other than "lifting" values from Scala code to SQL, though I haven't worked with Slick in a while. I am not sure why they exposed internal types, as we didn't really have any control over them.


Perfectly said(!).


I may have a contrarian opinion: a new language, beyond a certain level, does not tangibly increase productivity for a large enough team. Specifically, Scala does not necessarily offer more productivity than Java. It is a pleasure to write programs in a language with more powerful features, for sure. It's just that the bottlenecks of a project are usually not language features, but core algorithms, system designs, meticulous testing, conflicting requirements that demand careful trade-offs, availability of robust libraries and frameworks, maintainability of the most complex part of the system, and availability of qualified engineers. Few of those bottlenecks can be removed by switching to a language like Scala.


Can't language features help solve some of those problems though? Improvements in static typing can help with algorithm implementations, reduce the amount of testing required, and make conflicting requirements more obvious.


I only witnessed it from the outside, but saw a similar story play out. A client had bet on Scala for a new platform and made it about 6 months in before throwing in the towel and rewriting the whole thing in Java. It's too hard to recruit and the tooling isn't mature enough.


I didn't run into problems with recruiting in general being too difficult. Plenty of people are interested in FP and are eager to do it professionally. But recruiting the right mix of people can be a challenge. Scala tends to attract people with an experimental temperament, and scare away more conservative developers. A team needs a healthy mix of both, though. It's the creative tension among different attitudes about how to write code that yields the best work in the long run.


Yeah, good luck when someone with an experimental temperament decides to implement some of your crucial functionality with semialgebras (that's what happened in my previous job, where we ran Scala). I'm happy I didn't have to debug any customer issues around that module. Also, I didn't enjoy the constant bickering with my reviewers about what makes code beautiful or not. Apparently there are five ways to do everything in Scala, and I always ended up picking the one my teammates liked the least.


Yes. That's pretty much how the Scala project I was working on ran itself aground. Everyone got so caught up in flexing at each other that solving actual business problems became a secondary priority.


This is why I think the community is way more important than the language. In Python people are like, hmm, this is a hack, but it works for now, in Scala it’s, what have you done you savage?!?


It's also just really weird, when you step back and take a good long look at it, to see people getting worked up about hacky ways of doing things within the context of a language that implemented sum types the way Scala implemented sum types, and that implemented typeclasses the way that Scala implemented typeclasses.

Which, I don't want to be too down on Scala. Overall, I like the language. Scala's real weak point is its culture. There's a decent risk of cognitive dissonance when you try to wrap an ethos of design purity around a language that's always been a bit of a communal experimentation project.


This points to a culture problem more than the language itself though.


Interestingly, when we were working on the Java code, we didn't have this problem. Same people, same time, same git repository, different modules.


This doesn’t terribly surprise me. Community isn’t just the people but also the customs and idioms within the community. It’s probably similar to ‘code switching’ in verbal languages. People talk (and act) differently depending on who they’re talking to.


Were semialgebras the right abstraction, or over-engineering?

What is helpful in situations like this is the motto "as simple as possible, but not simpler". During a code review, if you see something you think is too complex, ask the author: "Could this be made simpler? If not, where exactly did a simpler approach fall short?" It sometimes helps find an overlooked simplification.


> Were semialgebras the right abstraction, or over-engineering?

I think that's missing the point.

The criterion should be: is the code clear enough to be easily testable, easy to debug, and easy to evolve?

I will take a wrong abstraction with the qualities listed above any day of the week.


This is a good call.

The problem with wrong abstractions, though, is that they do not work, especially when you make the next step on the roadmap.

I would say that clear, testable, evolvable code with little abstraction may be fine. OTOH, boilerplate and copy-paste prevent easy or well-controlled evolution. The abstractions end up inlined and fused into the code instead of being made visible, and it becomes easy to miss a case when a concerted change is needed.


In contrast I think that going all in on Java scares away people who like to experiment.


Be honest and give a short list of experimental approaches that you wanted to try and couldn't do in Java. What did you want to do and couldn't?

And, let's ask Rich Hickey. What language did he use to experiment with a new s-expression language? Why wasn't RH scared away? You want the brutally honest answer? Because he is smart and can grok the complexity of Java in toto.

What sort of experiments can you not do on a virtual machine based language, with an open bytecode/VM spec, open class loaders, and compile-time and runtime instrumentation and meta-capabilities?

I used to do the switch-the-superclass-at-load-time trick to experiment with adaptable programs. This was circa the late 90s. Loads of fun (npi).

What scares many away from Java is that it is now a very huge mental object. But not wanting to admit this, they simply pass along FUD.


> I used to do the switch-the-superclass-at-load-time trick to experiment with adaptable programs.

This probably falls in undefined behavior territory. What kind of errors did you get when something went wrong?


It is not undefined behavior. Bytecodes are modified via a custom classloader and the superclass is swapped from e.g. Object to something else. As for errors, honestly it's been 23+ years so I don't really remember. Naturally you can't just swap in any random class - it has to be correct, e.g. you can't have an override method in the child that has no corresponding method in the super. And if memory serves, you can also play this trick using classpaths. Assume a stub package com.foo.Context which is simply an extension of Object, and derive components from com.foo.Context.SuperClass. You simply need to provide a different eponymous package for the adapted instance of the code.

https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-4.ht...


I've definitely experienced some scary bugs caused by runtime bytecode injection (from a tracing library). I didn't bother tracking it down further than identifying it was at the intersection of some slightly strange type and the tracing. Caused VM errors when this method was called but only when the tracing was configured.

Not saying it's not interesting to experiment with things like that, but there are non-trivial risks associated, if you ask me, with depending on it in production.


I'm pretty sure (but am hedging here :) that you would get any errors in the verification phase of loading a class. What you saw may have been due to bugs in that library you were using.

Anyway, this was before ASM and other bytecode engineering libraries. Today I would not directly hack the class def myself.


I thought it was understood that choosing a less popular language meant hiring people for fundamentals and then letting them ramp on on your language. Did the client choose Scala thinking they would just hire "Scala people"?


If you stick to a reasonable middle-ground of scala it really shouldn't be hard to teach. I do like the FP community but that style has gone a long way towards mythologizing scala as an impossible language to teach.

In my experience it's clearly considerably simpler than Rust or C++ to become productive in.


I think mixing the OO and FP is the crux of the problem. Type inference and subtyping (class based inheritance) don't mix well, and often require type annotations to help the compiler when types get somewhat complex. Eventually you develop an instinct for it, and it's not a problem, but the road to developing that instinct is littered with torn out hair. (edit: spelling)
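A small example of the kind of annotation being described, using the classic foldLeft/Nil interaction between inference and subtyping:

    // Nil is typed as List[Nothing], so the accumulator's type is fixed too early:
    //   List(1, 2, 3).foldLeft(Nil)((acc, x) => x :: acc)
    //   error: type mismatch; found: List[Int], required: List[Nothing]

    // An explicit annotation on the seed value sorts it out:
    val reversed = List(1, 2, 3).foldLeft(List.empty[Int])((acc, x) => x :: acc)
    // reversed == List(3, 2, 1)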


I have found that in languages that support both FP and OO paradigms, it's best to do things like data manipulation in FP, and to use OO to encapsulate those processes, limited to just passing messages to other objects.

Avoid inheritance. Once an object is instantiated, don't change its internal state. etc.


>Avoid inheritance. Once an object is instantiated, don't change its internal state. etc.

Just embrace Clojure then. You get all that enforced for you, plus the entire Java eco-system.


I'm not a huge fan of the Java ecosystem.

Clojure does look really nice and Datomic looks pretty slick.

Have you used Clojerl? https://github.com/clojerl/clojerl


When you take away Java, the JVM ecosystem is actually really nice. It's ok if you don't like it, I'm just saying, I used to think the same, but since taking on Clojure I've completely changed my mind, and find the Java ecosystem one of the best out there.

Clojure's ecosystem is even better, it improves on the Java one a lot, simplifying most of the warts with the Java one, and like you've brought up, it goes beyond the JVM in having quite a lot of active dialects.

I haven't tried Clojerl beyond just REPL, but it seems quite complete, and the maintainer is very active. ClojureScript is also quite nice, if you prefer the JavaScript/Node ecosystem.

You can also use ClojureCLR if you like the .Net ecosystem better.

And finally there's Babashka which deserves a mention, if you want something more like Python.

You also get to play with Clojure derivatives as well: Janet, Fennel, and Ferret can all be picked up in a day if you already know Clojure.

For me Clojure was a great investment, and it's replaced all my needs in all areas: scripting, front-end, application development (mobile and desktop), command line tools, server side, big data, batch processing, ML, data visualization, etc. Only downside is you will exist within a niche, but that niche has everything I need.


nice! i'll check it out!


You lose static typing though, and that's a deal breaker for me.

And no, core.typed and other half-hearted attempts at gradual typing for Clojure don't even come close to hitting that mark.


Yes this is the conclusion I've drawn. Often called "FP in the small, OOP in the large".

Higher-level organizational patterns in pure FP style aren't familiar enough to replace the ergonomics of just using a few classes and DI to make things modular. Maybe tagless final or something similar will eclipse it eventually, but for now it seems like using "anemic style" OOP, with functional in the small is still the best we have for mainstream development.


I agree you should avoid inheritance, but it’s the fact that Scala supports inheritance that makes type inference difficult (also null). Type inference in Haskell is much better, for example. I saw a great presentation on the topic by Thomas Wies. Here’s a paper he wrote if you are interested https://cs.nyu.edu/wies/publ/finding_minimum_type_error_sour...


This is my philosophy:

Data types are object-oriented. They are responsible for ensuring that their internal state is consistent, and nothing more. They may inherit if it makes sense, but it usually doesn't and most of the time aggregation is more appropriate.

Business rules are functional. There's just a bag of composable, pure functions that take the various data types and perform validations, transformations, etc.

External services, such as a database, are contractual. This would be an interface in Java or C# defining the required operations of the service. This allows manipulation of these external services during testing. Unit testing would mock them; integration testing would not.

Workflows or services are procedural. They combine everything else into an actual use-case. They read in a linear flow of what needs to happen when to meet the use-case.
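A rough Scala sketch of that layering, with made-up names, just to make the mapping concrete:

    // Data types: OO-ish, responsible for keeping their own state consistent
    final case class Email private (value: String)
    object Email {
      def parse(raw: String): Either[String, Email] =
        if (raw.contains("@")) Right(new Email(raw.trim)) else Left(s"invalid email: $raw")
    }

    // Business rules: a bag of pure functions over the data types
    object Rules {
      def normalize(e: Email): Email = Email.parse(e.value.toLowerCase).getOrElse(e)
    }

    // External services: a contract, mocked in unit tests, real in integration tests
    trait UserStore {
      def save(e: Email): Unit
    }

    // Workflow: procedural glue for one use case
    class SignupService(store: UserStore) {
      def signup(raw: String): Either[String, Unit] =
        Email.parse(raw).map(Rules.normalize).map(store.save)
    }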


Great breakdown. Seriously, I've had very similar ideas but never tried to build a coherent mapping like this. Thanks for writing it out!


Maybe the best approach is using a language with dynamic types? (e.g. Clojure)


We chose Scala and had so many problems with the ecosystem, compiler, and community. I still love the language (it's the programming language I've enjoyed most), but I'm glad we sold the startup.


Scala has its place... it's just not one that requires fast on-boarding (like a small startup where people constantly come and go). The success stories usually involve companies that are mature enough to spend time on design, training, and supervision (Wix, for example).


>We really loved the integration of OO and FP, really miss the FP awesomeness honestly... but simplicity trumps doing either of those things well.

Agreed. I like the Scala language, but I can't imagine a scenario where I would recommend it for a project over Java.


> but simplicity trumps doing either of those things well.

That's the genius (?) of Go. They gave us what we didn't know we needed.

Software people love the "manliness" that comes with "serious", "real" programming languages. Maybe it's the desire to impress our peers with our intelligence? But in the end the only thing that matters is whether you can get your shit done and on time in at least a semi-working state.


I appreciate what you're getting at, but is Go really the epitome of a "get things done" language? It seems to suffer from some serious boilerplate issues even compared to something like Java.

As far as it being driven by machismo, I think that's part of it, but I'd be more inclined to blame it on the following:

* complex languages/mental frameworks function as an "intellectual trap" where people more inclined to be interested in intellectual things for their own sake start to lose sight of the forest for the trees.

* FOMO that one must keep up to speed with all the coolest new developments in programming or face mounting irrelevance

* Elitism/self promotion where specializing in something difficult (and potentially keeping newcomers out) either provides internal or external motivation

I think some of these traits seem somewhat correlated with the "nerd machismo" archetype, but I think the actual causes are multifaceted.


Pretty much a noob at FP in general: wouldn't F# solve the FP and OO integration?


I did Scala for ten years and I've been at it with F# for a year and a half now. I do miss some very powerful features of Scala such as HKT and "typeclasses" occasionally, but overall it works very well.

Sure, Scala is complex but I find that there are a few misunderstandings here.

Java, for instance, _requires_ that you use a number of supporting tools for it to be useful these days. You need a container, you need quite extensive testing, and you need quite advanced build tools. Sure, these have all been around long enough not to be what we think about when we compare Java to other things, but they're implicitly also Java. I can't speak for Go because I've not looked at it.

But Scala has strengths that mean you don't need to have _external_ tools and libraries for those things. That strength is types, which means that the compiler is going to want to "pick a fight" with you at a much higher frequency. That is a pain. The reason that we're here talking about that pain with Scala is that you don't think about it in your Java project anymore, because you use heuristics and heads in the sand instead.

Another thing is that I think that Scala gives you a big fat revolver to shoot yourself in the foot with. Java has a bb-gun so no matter where you aim it, you're not going to accomplish a lot.

Over to F#: I'm going to claim that the .NET toolchain is superior to the JVM ones. The CLI tools and Paket run circles around the POS that is SBT.


Huh, I use Java all the time without containers. You definitely don't need them.


> integration of OO and FP

OO and FP are paradigms. You can use any of them, or both of them, in any language.


You can use FP in Java, or OO in Haskell. That doesn't mean that they're equally usable paradigms.

I believe Scala had the goal of offering a high level of usability for both FP and OO. It failed despite an admirable effort, because of an incredible amount of language complexity.


I wouldn't blame the language, I'd blame the people who chose Scala.

The impression I always had was that Scala was a research project into how various programming language features could live together and interact, hence their everything and the kitchen sink approach. From an academic point of view I think Scala was a huge success.

That people chose to use Scala in production speaks more to how badly people hate Java and want an alternative, not to Scala's failings.


Indeed, Scala looks like a language research project, more like Perl and Haskell. You definitely can use all of them in production and benefit from that, but you need both mastery and restraint in using such a language.

Golang is the opposite: it takes the ideas from the 1980s with some early 1990s (Pascal, Modula-2, Oberon; CSP; garbage collection) and executes them to near perfection. It even ignores some of the obvious faults in ergonomics (if err != nil { return err } everywhere) to keep things dead simple and reasonably performant. It's much harder to write incomprehensible code in Golang, even if you try, so lack of mastery and restraint would have much less catastrophic consequences than in a highly expressive language like Scala (or e.g. C++).


My issue with Golang is that I find writing code just takes so much longer (and often with more lines) than in other contemporary languages like Swift. The rigidity is a double-edged sword... it removes potential issues that could cause you to trip on yourself, but at the cost of productivity.


Maintaining Haskell code in my experience is a piece of cake. The community is really careful to avoid breaking changes, and when they can't be avoided, to introduce them in the most polite, gradual, convenient way possible.


Recent Java versions made many QoL improvements that remove some of the complaints, IMO. And there are more neat improvements coming (already there?) with 15 and 17.


Also, Kotlin is a rather nice and much less radical "better Java", well-supported, but mostly popular on mobile.


Kotlin is by far my favorite language-to-date. I moved my dev team to it years ago, it's all server side. We still have over a decade's worth of Java code to maintain, but anything new is in Kotlin (even in the legacy applications, we just add Kotlin). So nice. Makes me happy when other people get onboard and say "oooh yes, this is much better than our legacy Java". I was a HUGE Java proponent from 1999 to around 2010, when I started really getting annoyed that the language was missing so much that other languages had. Eventually we tried Scala, too complex to deal with in large teams to get everyone onboard, Clojure was a pain as everyone tried to be as clever as possible (and errors were awful, tooling was painful as not everyone wants to use emacs), then Kotlin came around and we never looked back.


Kotlin on the server is growing though, and it has one of the biggest library ecosystems in existence behind it.


Glad to hear this. There's really no reason for it not to do well on the server.


Pattern matching is (will be) better implemented in Java. It's fully fledged compared to Kotlin's implementation.


When will the last-argument-as-block be implemented in Java?

This one brilliant feature allows for easy DSLs without macros, parsing, or otherwise introducing new concepts to the language.
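(For comparison, Scala already gets a similar effect from curried parameter lists plus by-name arguments - a small sketch:)

    // A curried method whose last parameter list takes a by-name block...
    def retry[A](times: Int)(body: => A): A =
      try body
      catch { case _: Exception if times > 1 => retry(times - 1)(body) }

    // ...can be called with braces, which reads like a built-in control structure:
    val answer = retry(3) {
      println("trying...")
      42
    }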


The ironic thing is, Java now has records (case classes in Scala) and pattern matching (fully fledged Scala-like pattern matching is in the works). It's better designed than Scala, and will be faster due to better integration with the JVM.
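(For anyone who hasn't seen the Scala side being referenced: case classes plus pattern matching over a sealed hierarchy, sketched below:)

    sealed trait Shape
    final case class Circle(radius: Double) extends Shape
    final case class Rect(width: Double, height: Double) extends Shape

    // Because Shape is sealed, the compiler warns when a case is missing.
    def area(s: Shape): Double = s match {
      case Circle(r)  => math.Pi * r * r
      case Rect(w, h) => w * h
    }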


> better designed than Scala

How so? Do you mean Java in general, or just the pattern matching part? Can you provide an example where Java is "better designed"?


Destructuring in Scala is allocation-heavy; I expect it to be much lighter weight in Java.
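(To make the allocation point concrete: case-class matches are usually optimized by the compiler, but a hand-written extractor that returns Option[(A, B)] allocates a Some and a tuple on every successful match. A small sketch with made-up names:)

    object FullName {
      // Each successful match allocates a Some and a Tuple2 here.
      def unapply(s: String): Option[(String, String)] =
        s.split(" ") match {
          case Array(first, last) => Some((first, last))
          case _                  => None
        }
    }

    "Grace Hopper" match {
      case FullName(first, last) => println(s"$first / $last")
      case _                     => println("no match")
    }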


Scala compiles to Java bytecode. Scala can only be faster than Java, in that Scala can leverage all the functionality of the JVM and on top of that provide extra smarts that Java cannot, due to the backwards compatibility it has to maintain.


This has never been true for non-native languages (Scala, Clojure, Kotlin, etc.) as far as I'm aware. They've always been slower. The JVM is designed with Java in mind and co-evolves with it.


One of the benefits of Scala's willingness to break bytecode compatibility all the time is that whenever Java/JVM implements a Scala concept, Scala switches to the Java method under the hood. Whenever the JVM gets Java-specific optimisations, Scala will use those. That's honestly pretty cool.


They have in practice been slower because they try to be higher level, more ergonomic languages than Java. There aren't many performance-first JVM languages yet I think (Fortress maybe? They cited JVM problems when they wrapped up though).


I might be a bit biased, since I've been writing Scala professionally for years.

While most of the article rings true, I wanted to share some counter-arguments:

1) A lot of the problems mentioned are only really noticeable if you use Spark, which really lags behind in Scala versions compared to the rest of the ecosystem. As someone who doesn't use Spark, I don't really feel much of the "Scala minor versions" pain.

2) While not great, cross-compilation/publishing is not that bad (there are plugins like `sbt-release` and `sbt-crossproject` that pretty much take care of it).

3) I mostly disagree with the "Difficult to publish to Maven" point. I'm not saying that it's easy, but I enjoy the fact that it's not as trivial as in other languages. I would argue that this whole process makes [1] almost impossible, which is a plus for me.

1: https://medium.com/@alex.birsan/dependency-confusion-4a5d60f...


Yeah point 3 is underappreciated for sure.

> You need to open a JIRA ticket to get a namespace, create GPG keys, register keys in a keyserver, and add SBT plugins just to get a manual publishing process working. It’s a lot more work than publishing to PyPI or RubyGems.

It's a little annoying to have to go through that, but you only have to do it once per domain, and the turnaround from the people who manage the sonatype jira is usually pretty quick. In return the ecosystem gets a lot of protection from the kind of exploits you mentioned.


Meh, the complaints fall basically in two categories:

1. Open source libraries are sometimes not maintained which is absolutely true but also just a general hazard of the open source software ecosystem. The solution in most cases is to just not use open source libraries that are somebody's hobby project (which the author very sensibly recommends). But that is true whether you are using Scala or not.

2. Spark is weird. Which is true but that seems like it is more to do with the idiosyncratic nature of Spark than it is anything to do with Scala. That is, Spark development is sort of its own category and poses certain problems that are pretty specific to Spark and you don't necessarily run into when you are building non-Spark project in Scala.


Some responses:

1. Yea, but languages that are backwards compatible have libs that are useful for way longer. I can still use Java JAR files that were built with Java 8, and published many years ago. The open source libs get stale so much quicker in Scala.

2. I'd actually argue that Spark is less weird than some other dependencies. If you look at the cats README (https://github.com/typelevel/cats) there is this caveat: Cats relies on improved type inference via the fix for SI-2712, which is not enabled by default. For Scala 2.11.9+ or 2.12 you should add the following to your build.sbt: scalacOptions += "-Ypartial-unification"

More cats specific stuff here: https://github.com/sbt/sbt/releases/tag/v1.5.0-RC2

I use Spark a lot so I used that for the examples, but I think other libs cause even weirder maintenance challenges.


Fair enough, the whole SI-2712 debacle is indeed a source of issues well beyond Spark but in general it's just a matter of adding a compiler flag to sort it out. The other example is of course the whole mess of macros...

On the other side though, strict backwards compat creates its own set of costs (that are often harder to quantify). Java backwards compatibility is great, but it seems like it is also at least part of the reason Java moves at such a glacial pace.


You don't have to add it, it just helps reduce boilerplate when using cats


> Scala can also bring out the weirdness in programmers and create codebases that are incredibly difficult to follow, independent of the maintenance cost. Some programmers are more interested in functional programming paradigms and category theory than the drudgery of generating business value for paying customers.

This is the tragedy of Scala. The maintenance cost is bearable. In return you get a language that dares to shed legacy cruft and fix bugs. Trash fire code bases, on the other hand, are hard to deal with.


To be honest I'm also seeing this with TypeScript, for example. I have seen so much energy wasted just for the pleasure of using Ramda[1] with TS. It's probably even worse than with Scala, because at least with Scala the language is on your side to do FP.

[1] https://ramdajs.com/


I've used TypeScript with lodash/fp (a similar FP library) and I found it to be an absolute joy. Once the team decided to really use lodash/fp for data manipulation, and as the basis for composable logic, we really started to move quickly.

We came up with our own linting guide, and guidelines for writing extensible and composable code.

I've never coded in Scala, but I've shipped production code in languages that support multiple paradigms.

Conventions are important. Things need to be agreed upon and done in a certain way or your code base will become a tangled mess.


I've been on C# for over a decade and I'm still very happy with it. I think it's highly underrated in the startup space.

When helping start my current startup, one of the founders asked why I chose C# as he was asked by others why we'd pick that language/stack as it's not common for SF startups. I told him that it's less about the language and I could do just about anything with it. It's turned out great and of all the things to worry about, the language hasn't ever come up again.


Agreed - the startup I work with now uses C# and it's been very solid.

The tooling is good, and the lang has never gotten in the way of getting work done.


C# is definitely not sexy, but it is well documented, and there is plenty of help. I don't see why more companies don't go this route (not just C#, but any language that fits this description). Boring is good when you have a short timeframe and it has to work well with minimal fuss.


"Boring" languages almost always equal more boilerplate.

Personally, I'll take an expressive, GC'ed language, but write "boring"-ish code when I'm on a short time-frame and I have few developers.

EDIT: Just realised this sounds needlessly antagonistic; the point I'm trying to make is that "boring" is too coarse a measurement. Java 1.8 is more "boring" with its required explicit type declarations, but I would /always/ pick a newer Java version to avoid the boilerplate.


In my experience, it's easier for many developers to blame the tools and language rather than admit the dumpster fire of a codebase was entirely their doing and their responsibility.


Fentanyl is 50-100x more potent than morphine. It's still the user's fault for overdosing, but overdosing is easier to do with fent.

Pepsi is more potent (sugar) than an apple, easier to get fat. Still the consumer's fault.

My point is the choice of language may require more energy to enforce better code practices. Scala is so feature packed that it makes it easy for a developer to... overdose.


Counterpoint. A focus on type-safety makes Scala less likely to suffer from too-clever-by-half solutions. Sometimes a group of developers will be smart enough to create an overly-complicated solution that still type checks and works (see the original Slick library...) but for most normal teams, the compiler is a guard against overly ambitious type magic.

On the other hand, I've seen my fair share of too-clever uses of runtime reflection (Java) or weird metaprogramming (Python/JS) techniques, except they were not only too-clever but actually just didn't work and would fail miserably at inopportune times.


A more complicated language needs more skills and more skills mean you are inexperienced in at least some of those skills and inexperience causes dumpster fire codebases.

When I started software development I had "hammer and nail syndrome" for absolutely everything. I figured out something new and then used it everywhere. Experimenting in this way is necessary but you absolutely want to avoid it in a production codebase. Over time you have run out of hammers and nails to play around with. You've developed the ability to judge where and when a specific hammer is most effective and when it shouldn't be used at all. Scala offers you a wide variety of hammers and nails. You'll have to experiment a lot.


It may be reasonable to think however that some technologies are more conducive to dumpster fires than others.


I’ve seen lots of projects succeed in Scala, and I’ve seen some fail. The latter was usually caused by writing spaghetti code crap instead of language complexity itself. Obviously some programmers blamed the language, but the harsh reality is that competent engineers can make things work beautifully.


Is the Scala language simply too big? I tried to learn it once (years ago) and quickly found myself overwhelmed by the number of concepts. I remember feeling like 80% of power of the language could have been achieved with 20% of the complexity. The latest Java releases are somewhat proving this with functional interfaces and records.


A lot of the problems with Scala echo many of the problems that people used to write about Perl codebases, minus specific language quirks like typing and objects. The problem with complex, rich languages is that they guarantee nothing about programmers' wisdom in picking, choosing, and abusing features in ways that won't be easily legible to future maintainers.

Somehow Scala has a lot of the issues that I feared would have impacted TypeScript by now (yet haven't), so I'd be interested in comparing and contrasting why TypeScript isn't perceived as having the kinds of issues Scala does, when on paper they sound similar in features and technical challenges.

Having worked on a lot of legacy codebases both creating and inheriting I'm quite interested in practices that balance rapidly prototyping and deploying systems and gracefully degrading when projects lose people and funding. It seems tragic to see so much of the work of our lives thrown away so fast.


From a quick Google search it looks like TS/JS doesn't let you define custom operators like Scala does. Scala's implicit stuff is messy too. The original article mostly talked about libraries, though.


Scala's operator overloading and implicits are known sharp edges (and don't get me started on XML literals) but the article doesn't mention those as hindrances for maintainability. Package dependencies are not necessarily a language issue and become more about conventions of its community. Comparatively, people aren't complaining about dependency problems in the Clojure, Java, and even Kotlin communities because everything's mostly been solved through the Maven dependency conventions that can depend upon JVM and compiler versions, too. In contrast, I'm not seeing much mention of this issue in Typescript except with the typesVersions and the Node ecosystem is mature at this point with known compatibility mechanisms.

So I'm here scratching my head at the issues pointed out in the OP, wondering if I missed something critical about what makes Scala so different in its ecosystem conventions that the community couldn't fix it over years. Even Node.js managed to work something out via Yarn.


Lack of correct leadership.

Java, Kotlin, C# and TypeScript are led by companies that want usage. Scala was and still is primarily an academic exercise. Same problems as with Haskell - the leadership are paid to add random optimised-for-sounding-clever ideas to the language, not design a language in the gestalt that optimises for user success.

In particular TypeScript is not a research language and therefore the documentation, marketing, leadership etc doesn't strongly emphasise convoluted FP type theory. That means the sort of people who really want to go wild with that stuff stay away.


Scala's origins at EPFL under Odersky do give it a disadvantage, given the history of academic-origin languages in industry. OCaml was made at INRIA and it seems to have suffered the fate of becoming a niche language and toolchain for industry programmers. The Limbo programming language from Plan 9 didn't get very far until it was re-adapted and re-imagined as Go, which has obviously gotten a lot of adoption from industry.

I'm not an either/or kind of person and don't see why a language suitable for academic purposes can't be broadly useful in industry, but I would also agree that leadership that has little background in certain use cases may not design and orient the language accordingly.


There are certainly folks that argue the language is too big. Here's a great talk arguing that a lot of language features should be eliminated and Scala should be made a functional programming language: https://www.youtube.com/watch?v=v8IQ-X2HkGE&t=1s&ab_channel=...

I'm personally in the camp of "keep Scala huge and weird and multi-paradigm, but just make fewer breaking changes and focus all efforts on making the ecosystem more stable".


Worth noting that the presenter of this talk has done a complete 180 on subtyping as he's now using subtyping and variance in the core of his flagship library ZIO. He realized it was sometimes better to use the language's strengths rather than denigrate them for "ideological" reasons.

Also worth noting that Scala is not a "very big" language in any sense that I know of. It has a lot fewer features than C#, for instance. And probably a comparable amount to TypeScript.


I've been at a Scala shop for just over a year, and it's certainly the case that there's a lot of overlap between language features. Personally I prefer having to make judgement calls as needed, rather than being shoehorned into a one-size-fits-all approach (e.g. like Java's class-based, single-inheritance OOP).

One thing I find encouraging is that Scala features may become deprecated (either explicitly or de facto) if an overlapping feature is added that works 'better'. Examples I've come across are Manifest getting replaced by TypeTag and implicit coercions getting replaced by implicit classes.
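(A sketch of the second migration, with a made-up extension method: the old implicit def conversion versus the now-preferred implicit class:)

    object IntSyntax {
      // Newer style: an implicit class adds the method without a standalone conversion.
      implicit class RichIntOps(val i: Int) extends AnyVal {
        def squared: Int = i * i
      }
    }

    // Older style, now discouraged (and gated behind scala.language.implicitConversions):
    //   implicit def richInt(i: Int): RichIntOps = new RichIntOps(i)

    import IntSyntax._
    3.squared // 9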


Had the same feeling many years ago when I wanted to learn it. I bought the book by the main creator (?) behind Scala and started to read and experiment with it on a weekend. But every chapter was filled with many completely different ways to solve something. In the end I decided that Scala looks really great, but would be really problematic in a team if everything can be done in so many different styles.

It looked to me like someone wanted to add every existing language into one single language.


I don't see this as a particularly bad problem in Scala tbh. Any language that is mature enough to be used for a large project will include many different ways of solving common problems. I'm old enough to remember listening to J2EE developers arguing endlessly about which GoF design patterns to use in which situations, usually resulting in using roughly all of them in one way or another so everyone could feel included.


But with the old GoF debates all of those patterns were built on a very small set of primitives. In fact, that's why the patterns were necessary! Scala combines (at least) 2 major paradigms into one language.


Have you tried C++, Rust, Java or any other language? They all at some point in the development cycle become a maintenance nightmare. I will let you in on a secret: we software developers create problems and then solve those problems. That is how software developer jobs are retained :)


I have no complaints about Rust after it released 1.0. My oldest projects still build fine with the latest compiler (unlike my Node.js projects.)

When I read the article I was shocked that 2.11/2.12/2.13 could be breaking changes that prevent existing libraries from working, and so badly that they take years to catch up.


I maintain open source libs in Python, Ruby, and Scala - the Scala libs are a lot more difficult to maintain.

Certain languages prioritize backwards compatibility and maintainability over new features / complicated programming stuff. The maintenance burden is dramatically different for different languages in my experience.


They all have their tipping points. They don't all herd you toward that tipping point at the same pace.


I mean that isn't the argument.

Scala's problem is that no unified paradigm is proposed or accepted at the community level. 'Dumb' languages like Java/Go limit the style choices one programmer can take within the language, but Scala seems to encourage the opposite.


Software engineers solve simple existing problems by creating complex new problems.


>Python projects that are built with Python 3.6 are usable in Python 3.7 projects for example.

This has not been my experience in the Machine Learning space.


For most simple HTTP webservers, Python maintains somewhat OK-ish source/binary compatibility between Python versions. But my experience with Scala was things breaking between Scala versions / waiting for some library to compile with the next Scala version...

I'm surprised Scala couldn't solve this better than Python -- I mean, Scala's a compiled language, so it should have more wriggle room...


Scala’s type system doesn’t map perfectly to that of the JVM, so they had to bend the latter to implement Scala. That’s similar to how C++ compilers bend the rules to conform to C linkers (https://en.wikipedia.org/wiki/Name_mangling#C++), but much more complex because of the richer type system of the JVM.

Scala developers aren’t willing to freeze that mapping, partly because they found out better ways to do such mappings, and partly because they keep changing the language, changing what was the best way to do that mapping.

I think it’s easier for interpreted languages to keep their internals compatible. They are willing to give up some speed for convenience, so even if they think “I wish we had done that differently”, the pressure to change it isn’t that high.

They also keep more metadata around. In some cases, that enables them to discover “this is using the old way to do Foo”, and fix that up to use the new way.


It is intentional. Scala maintainers want to keep the language evolving faster than Python/Java. If you look at Scala 3/Dotty, it actually is a very different language than Scala 2.


Python is distributed as source code, and it sounds like Scala dependencies are distributed as JAR files. The compatibility problems when moving compilers are not entirely unexpected, I'd say.


But Scala's cousin Kotlin doesn't seem to suffer from this a lot... Kotlin library packages don't have the Kotlin version in their download link... maybe I'm missing something?


Kotlin is a “better Java”. Scala is sort of a tug of war between a better Java and a worse Haskell.

More importantly, Kotlin is designed to be as straightforwardly compatible with Java at the bytecode level as possible. Scala isn't.


Could you provide any specific examples? I've never had any problems swapping between versions but I don't do anything in the ML space.


Python 3.9 was released on October 5, 2020, but TensorFlow still doesn't support it. But that's an aberration, and most of the time and for most packages minor Python releases don't break them.


I feel like conventions would solve this problem. There's lots of programming languages that cater to multiple paradigms. Ruby and Javascript come to mind, conventions really do help when dealing with specific problems/frameworks.

I'm not sure I agree with OPs blog post.

This line specifically stuck out to me. "Scala should be avoided for easier problems that don’t require advance programming language features."

What are advanced programming features? Isn't this just a matter of where you are coming to Scala from?


Some of the worst and unmaintainable code I've ever seen was written in Python & Java.

While some of the cleanest codebases were written in Scala and Javascript.

The problem is not the language, but the team & overall approach to development (rushing to deploy bad code because your manager said so; lack of code review).

Tooling can be better, but it has massively improved over the last 2 years.


I have seen quagmires in several languages at various small and very large companies. So many withered and bloated Java apps that are hard to maintain.

And I agree it is not the language, it is the stewardship. Who they hire, how they manage their tech, culture, pairing/mob, quality over speed etc. And the reality that all codebases rot, at different speeds depending on these factors, but eventually they all rot.

I have seen well maintained large Scala architectures, at very large organisations because of great stewardship of the projects and people.

And I have seen a car-crash bundle of apps in Scala because they hired rockstar developers that incorporated every shiny feature continuously, leaving it unmaintainable by the rest. Or by recent Java converts that were not mentored well enough, so they were still writing Java code in Scala.

I love Scala, but with bad HR processes and weak leadership it will grind to a halt. And they will blame the language, not themselves. But the same would probably have happened with any language.


I've written a ton of code in a bunch of different languages, and you're exactly on point here.

It's got everything to do with discipline and time spent on creating and keeping a codebase simple and well factored. If you don't have a team doing that, you're going to have problems.

Scala is a sharp tool, in the wrong hands you're gonna slice your toes off. In the right hands, you can create simple, elegant solutions to challenging problems.


Not to brag but I recently ported our legacy java code base written for Java 1.5 to Java 8. The only change we made was to swap the jdbc jar as we also upgraded our Oracle DB.


8 to 11 can be a bit trickier, though. Subsequent upgrades should be boring again, but 8->9 can get interesting.


I just upgraded a spring boot app from 8 to 15, and it was seamless. It's been long enough now that most major dependencies just work with 9+ again.


Trying to upgrade stuff like Spring from 4.3 because you need 5.1 to run on Java 11+ is where we've felt the pain the most, but then we were doing some awful Spring stuff.


Damn dude, nice work. We're going 8 -> 11 right now and it's a pain in the butt.


What pain are you having specifically?


A former co-worker wrote a saga about trying to use modules in today’s Java ecosystem: https://www.theoryofgeek.com/articles/fomo-java-module-editi...

TL;DR: anyone doing it wrong anywhere breaks you, and it’s only a win if you feel a need to rip pieces out of your JVM.


> Difficult to publish to Maven

No more than Java, but that's also why most Scala projects published to Bintray, as it was much smoother. Oh wait...

https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocen...


With Java, you just have to publish a version, right? My understanding with Scala was that many of the versions are not compatible with each other.


I feel this one. I really liked the ambition and ideas of Scala, and after I took Odersky's own Scala course, I was all charged up to use it. But so many things about it end up being a giant pain in the ass that it just wasn't worth it to me for general-purpose coding.


The problem is that functional programming is a really good idea but nobody has figured out (yet) the specific industry applications where it makes possible new things that weren't possible before. Mapreduce was a big win but you can do it in java. Spark was a big win but you can drive it from python. How do you make a better CRUD app in Scala? Serving http requests but with monads is not the answer.


You could say the same about OOP though. Just like FP, OOP doesn't make new things possible, it just makes some things much easier.


java succeeded because of managed memory, which made it possible to build systems that didn’t randomly segfault by default


Scala has a Lisp-like quality in that I feel like I should love it. On a certain, 30,000-ft level, I do, but when I'm actually trying to solve a real problem I usually end up just doing it in Java and then "translating" it back into Scala.


> Scala has a Lisp-like quality in that I feel like I should love it.

This is the complete opposite of my experience.

I love Clojure because it enables me to build large / non-trivial applications after learning a minimal set of rules.

Scala, on the other hand, appears to have a relatively large number of syntax rules and special cases.


I'm curious, what is the lisp like quality?

I like scala. I haven't used it for anything big, though. And hard to argue that it has an interesting approach to discipline.

Lisp, though, I like to use. I'm not as impressed with myself for getting something working in it. I am impressed that the code in all of my books still works.


I mean I feel like I should love Lisp the way I feel like I should love Scala - in the abstract I do, but practically neither is my go-to.


Fair. I didn't know if there was any practice overlap.

For me, Lisp is bigger in the metaprogramming. It really pulls the covers back on how things relate. Not just in how to put the code together, but how to look at the data.

Scala was my first big intro to category theory. Helped me see relations in some higher ideas. Many, though, are hard to see a priori.


The issue I have with metaprogramming (Lisp or Elixir, but also C++ macros) is that it is easy to forget, unlike most programming principles. It requires you to twist your mind, and if you stop for two or three years, it requires effort to put your head back in that space.


I've found that as long as you keep it transparent and focus on what the shape of the data is, things go easier than you'd expect.

That said, a lot of things folks reach for with meta programming, I think should get a pass. The goal shouldn't be to make the code like a text. The goal should be to heavily leverage core data structures.


I think the expressiveness of the language is that quality. Which I believe is the base of the problem. If you have a really professional team then it will make programming even complex programs a breeze. But this same thing is a catastrophe in the hands of average devs. Go’s design was pretty much this (though I prefer a more expressive lang than that). Java, while not necessarily an elegant language, I think strikes a great balance between sufficient expressiveness and not-too-many ways to express things.


Some of this comes from a willingness to do it, though. With static imports, it isn't actually impossible to effectively add keyword-like things to Java. It is done far less often than it is in other communities, though. And I can't see why.


I don’t see any hard-to-reason-about feature in static imports. If I must choose something, I would say that reflection allows for some spooky things at a distance, but it is not abused as often in my experience.


Right. It is the lack of "abuse" that I can't really explain.

To be honest, the same people I used to see do heavy reflection are the same ones that did heavy metaprogramming. Usually to the same end.

I do not have a hypothesis on why.


That's one of the reasons I prefer Clojure over Scala, it has rock solid stability, and a great track record of backwards compatibility.


Scala 3 should help simplify the language and reduce abuse of features like implicits:

https://docs.scala-lang.org/scala3/new-in-scala3.html
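For example, Scala 3 splits the overloaded implicit keyword into intent-revealing given/using clauses - a small sketch:

    // Scala 2: one keyword, several meanings
    //   implicit val ord: Ordering[Point] = Ordering.by(_.x)
    //   def sort(ps: List[Point])(implicit ord: Ordering[Point]) = ps.sorted

    // Scala 3: 'given' declares an instance, 'using' declares a dependency on one
    case class Point(x: Int, y: Int)

    given Ordering[Point] = Ordering.by(_.x)

    def sort(ps: List[Point])(using Ordering[Point]): List[Point] = ps.sorted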


I've used Scala for all my startups since 2015. We moved everything to ZIO last year. In a nutshell, we love Scala.


Many people talk about Java, Go and similar languages, contrasting them with Scala, but I think this misses one crucial point. Java and Go are in many ways "done" languages. They evolve very little. Correct me if I'm wrong, but I don't think Go has changed significantly since 1.x. Java hadn't been evolving almost at all until recently, and the changes introduced now are still conservative, and it will take ages before the ecosystem catches up and actually makes good use of what's introduced in Java 11+.

On the other hand, yes - Scala is in many ways a research language. It's got a number of novel features, starting from the blend of FP and OO, through implicits, ending with metaprogramming. The dark side of introducing features that other languages don't have, is that they sometimes need polishing or complete removal. That impacts the compiler (which evolves as well - compiling Scala code is much more complex than the rather straightforward translation of Java to bytecode) and high-level language features alike.

The bright side is that software engineering is very far from being a "done field" (or at least I refuse to believe it is such). There's a lot to be discovered as to how we can write code that is readable, performant and - yes - maintainable.

"Wait, but if you said Scala is a research language, is it safe for business usage?" Yes - while Scala has a research side to it, it also is a language used successfully in many companies. It has a lot of libraries that are maintained, the language authors pay a lot of attention to binary compatibility (with well-defined limits - you know what to expect!).

Businesses have benefited from using the newest available technology in many fields, and I doubt software engineering should be different. Of course, you shouldn't be reckless - but using technology from 20 years ago "just to be safe" isn't always a recipe for success.

Yes, you can build great systems with Java and Go. Same is true for Scala. And yes, you can build totally unmaintainable systems in Scala, but in Java and Go as well.

Finally, a side note about OSS. I don't think it's fair to expect anybody to maintain any kind of library in any language for free. You like the library, it solved your problem - great, you can use it for free. But if you want it maintained above what the library author donates, either through their time or through corporate sponsorship - well, you can't have any expectations here. I'm sure a lot of these migrations problems could be solved, given a reasonable business offer.


> Finally, a side note about OSS. I don't think it's fair to expect anybody to maintain any kind of library in any language for free

True! But then why didn't Spark and Flink, which are backed by businesses, bother to catch up with changes in Scala? They delayed their upgrade to 2.12, not to mention 2.13, and as far as I know 3.0 hasn't even been discussed yet. Am I right?

In general I'm happy and thankful that I learned Scala! It shaped my perspective: combine programming paradigms and pick the best practices from each one rather than sticking to a single paradigm. But for work that I want to last long enough, I can't accept it, as there is no realm of calmness in view. The software industry is young and complex, and as in any other profession, a developer needs to think about lots of other things as well, not just the language and all the possible patterns/syntax it requires them to keep up with!

I think the value that Scala brings to the industry is exactly what prevents it from becoming a rival to the others.


> The software industry is young and complex, and as in any other profession, a developer needs to think about lots of other things as well, not just the language and all the possible patterns/syntax it requires them to keep up with!

Looks like you’ve chosen the wrong profession


Dear friend, I don't know about your experience in the industry, especially with Scala. My experience working with the language for more than 4 years and being responsible for more than 15 components written in Scala wasn't delightful. I'm still maintaining some components in Scala, and I have a clear picture of what that means. I hope you'll have a more successful experience with it.


> Scala libraries need to be cross compiled with different Scala versions.

This seems like a big turn-off. A question for any Scala devs out there: does this get in the way? And does it make upgrading Scala versions difficult?


Scala minor versions are not binary compatible. So if you try to use a library compiled for 2.12 in a project that is using 2.13, it won't work.

In practice, though, it is not a major issue. It is easy for library authors to cross-compile for different versions, and SBT handles fetching the correct binary dependency for your project. And in general, binary incompatibility doesn't mean source incompatibility, so upgrading your project's Scala version is usually as simple as changing the scalaVersion in your build.sbt (there's a sketch of an sbt build after this list). The exceptions are:

1. If you have a dependency that hasn't been cross-compiled for the new Scala version yet. So if you are trying to upgrade as soon as a new Scala minor version comes out then you may have issues if you have a lot of dependencies.

2. If you are trying to publish a library compiled for multiple older Scala versions then you can run into source compatibility issues. For instance, if you are using a method/interface that is new as of Scala 2.13 you won't be able to cross-compile to Scala 2.12.
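
For the curious, here's a rough sketch of what cross-building looks like in an sbt build (the library name and version numbers are just illustrative):

    // build.sbt -- minimal cross-building sketch; names and versions are illustrative
    ThisBuild / scalaVersion       := "2.13.5"
    ThisBuild / crossScalaVersions := Seq("2.12.13", "2.13.5")

    lazy val mylib = (project in file("."))
      .settings(
        name := "mylib",
        // %% appends the Scala binary version, so sbt fetches the artifact
        // built for whichever Scala version is currently active
        libraryDependencies += "org.typelevel" %% "cats-core" % "2.3.0"
      )

    // `sbt +test` or `sbt +publishLocal` runs the task once per crossScalaVersions entry,
    // and version-specific sources can live in src/main/scala-2.12 / src/main/scala-2.13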


Upgrading Spark from Scala 2.11 to Scala 2.12 was a huge effort. See here for more discussion: https://contributors.scala-lang.org/t/spark-as-a-scala-gatew...

Some projects are easier to upgrade, but for complex projects, minor Scala version bumps can be quite challenging.


And the story goes on: upgrading Spark to Scala 2.13 https://issues.apache.org/jira/browse/SPARK-25075


Which is why at the end of the day I always pick the platform language for production code.

As long as the platform is relevant everything moves along without extra layers of tooling and idiomatic wrapper libraries for platform APIs.

Might not have all the bells and whistles, not so shiny, but it works and I don't need to worry about my replacement having headaches.


As an old pragmatist, I believe that for a new startup the language of choice must be Java/C#.

   * Stable, well-understood language.
   * Rich ecosystem.
   * Fast enough.
   * Plenty of developers to hire from.
   * Scales well to large projects.
No other language/ecosystem ticks all the checkboxes.


If the list is meant to be universal, I would strike the last two from it, and clarify that "fast enough" means, "Fast enough for the problem you're trying to solve."

Working from the bottom: Scaling well to large projects is only a concern if you expect your individual projects to become large. I've worked on monoliths, and I've worked on systems that follow more of a Unix philosophy. In the former case, we had one project with a 7 digit line count. In the latter case, it was rare for a single application's codebase to grow to more than a few thousand lines of code.

"Plenty of developers to hire from" is often optimizing for the wrong problem. In many lines of business, domain expertise is a much more valuable skill, because it can only be developed through years of first-hand experience. A new programming language, on the other hand, can typically be taught to a reasonably skilled programmer in no more than a couple weeks.

Also, when you're just starting out, the single best programming language you can pick is the one that the people actually starting the business already know the best. They will not have senior mentors to help them learn the best way to use a new language.

Finally, "fast enough" is not a single target or a strict hierarchy. I used to work at a shop where we used Python, C#, and C++. Only C++ was fast enough, performance-wise, for some problems. Only Python was fast enough, productivity-wise, for other problems. C# covered the (large) middle ground.

It's also not a strict hierarchy. At my current job, I'm in the middle of a project to replace some Java components with Python. Java has not proven to be fast enough, performance-wise, for this particular problem, while Python gets us there with room to spare.


> "Plenty of developers to hire from" is often optimizing for the wrong problem. In many lines of business, domain expertise is a much more valuable skill, because it can only be developed through years of first-hand experience. A new programming language, on the other hand, can typically be taught to a reasonably skilled programmer in no more than a couple weeks.

This is a gross programming fallacy.

Yes, you'll learn the language at a superficial level. But learning the ecosystem and the best practices takes years.

Unless your project is short-lived or trivial, in which case you won't care. But if it's going to live for a long time and have a major impact, the decision to hire programmers with zero experience in your target language will bite you in the butt years later.


Actually, Kotlin does. I know this because I did a startup and chose Kotlin for it a few months before the language hit 1.0 (bold move!).

The Kotlin ecosystem is the Java ecosystem. It runs as fast as Java, partly because Kotlin is basically (90%) a better syntax for Java. It scaled well as the project got bigger. And of course in the beginning nobody knew Kotlin but it didn't hold back hiring at all - you just take Java devs, tell them to read the language guide and they're 90% of the way to being Kotlin devs. Some code review to show them useful little tidbits in the stdlib and you're done. It wasn't a problem.

10/10 would use for a startup again (and will)


No, they absolutely should not, and the reason has nothing to do with the languages as such (which would be fine choices!) and everything to do with people. A senior programmer who writes Java or C# in 2021 is not going to jump ship from his comfy, overpaid position into your fancy spaceship.

Why haven't you learned anything new?


As an empirical thinker...consider Ruby and Python (and I expect back end JavaScript will get in there soon enough)

https://charliereese.ca/article/top-50-y-combinator-tech-sta...


Go ticks all those boxes, including "easy to learn".


Does it? Does it have a production-ready ORM (such as Hibernate or SQLAlchemy) or solid cross-platform GUI libraries? (This is a big hit against C#, too, by the way.) :-)


Go? Typescript? C++1x?


C++ is much more complex than Scala! TypeScript is great because you can write the back end and front end in the same language.


Python, Ruby, JavaScript


I am an order of magnitude more productive in Scala compared to Python. The compiler does so much for me that I can focus on just writing my domain.

Frankly, I think Scala is just good at exposing weak programmers and teams with poor discipline. I've seen far worse codebases in Python where there's no discipline: all methods are public, classes get monkey-patched at runtime, etc.

I think articles like this are so common because Scala is easy to pick on. "Implicits are bad" is more interesting than "Implicits have a very specific usefulness", but implicits are peculiar and unique to Scala, so the language gets lampooned. "Private methods make code more maintainable" may be just as true, but no one says Python is universally hard to maintain -- so long as the person behind the keyboard knows what they are doing.


This is so even at a low level.

Look at the beautiful code examples in a Scala book and you might be seduced, but make a small change and there is nothing beautiful about it.


Interesting. I've had the opposite experience. Setting aside the question of beauty (which can often be pretty subjective), I usually find that scaling up solutions you might see in a library's documentation (which are necessarily simple) to more complex "real world" solutions is generally easier in Scala than in other languages. I don't know quite how to describe why that is, other than to say that the composition of primitive concepts in Scala seems to be more consistent and principled than in other languages I've used.

The only concrete case I can think of where your sentiment rings true is when you try to do some fancy dependent-typing thing for strict type safety and you end up having to write some (VERY) ugly type lambdas. But then again, that has mostly gone away with how 2.13+ handles existential types and of course with the kind-projector compiler plugin.
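
To make the type-lambda point concrete, here's a rough, self-contained sketch (the `Mapper` trait here is made up purely for illustration):

    object TypeLambdaDemo {
      // An interface abstracting over a type constructor F[_]
      trait Mapper[F[_]] {
        def map[A, B](fa: F[A])(f: A => B): F[B]
      }

      // Either takes two type parameters, so to use "Either[String, _]" where an F[_]
      // is expected you need a type lambda. The plain Scala 2 spelling:
      val eitherMapper: Mapper[({ type L[A] = Either[String, A] })#L] =
        new Mapper[({ type L[A] = Either[String, A] })#L] {
          def map[A, B](fa: Either[String, A])(f: A => B): Either[String, B] = fa.map(f)
        }

      // With the kind-projector compiler plugin enabled, the same type can be written
      // as Mapper[Either[String, *]], which is far easier on the eyes.
    }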


Yeah, I've had totally the opposite experience. I would inherit dumpster-fire Scala code bases and found the language always gave me the tools I needed to apply refactorings with surgical precision, always maintaining some invariant (e.g. code compiles, tests pass, etc.) for each commit.

By the end of whatever work I was doing, there was always the added benefit of the code base looking and feeling much better. Plus it’d take maybe 1/5 as much time to do a huge refactor compared to my experience in say Java.

I really don't understand why people have such a hard time with Scala, other than that it's very different from imperative ways of doing things.


Does Rust have this issue? Anyone with experience?


I did a lot of Scala some years ago and also ended up in sort of a ragequit. Nowadays I'm mostly doing Rust or Swift. Rust, of course, has a steep learning curve due to the lifetime and borrowing system. However, the errors are usually very easy to understand, and in the rare cases where they aren't, it is easy enough to figure out what's wrong by searching for the error message. One of the issues I had with Scala was that the error would be the compiler throwing up a stream of sigils because some generic requirement didn't hold, and it was very, very difficult to search for, let alone understand, what was wrong. Granted, it's been almost 6 years since I stopped using Scala. The experience that broke me back then was trying to use Slick, a typesafe Scala ORM. I forget the details, but I remember it was astonishingly hard to use back then and I couldn't for the life of me tell what was wrong because the compiler error messages were so cryptic.


The maintainer of Slick was in the last round of layoffs at Lightbend. Not sure who maintains Slick these days.

https://news.ycombinator.com/item?id=22854494


Yeah, Slick's errors were terrifying back in the day. They fixed both Slick and Scala to simply tell you which implicits are missing.


Yeah, it has gotten tremendously better in the last couple years when it comes to cryptic compiler errors. But even then Slick was a particularly egregious example. I had many WTF moments using Slick a few years back.


I love Rust, but it always looks ugly.


It's the semantics that counts.


Wrote 100k+ lines of Scala, now write Rust. It's very different: Rust has much more noise (but I like Rust as a language a lot) and doesn't look beautiful.


In my limited experience with Rust, things can get ugly very quickly, but it depends very much on the task. I wrote a game server monitor and a fairly extensive command-line calculator in Rust, and the code for those looks quite fancy. I wrote a tool that converts a lexicon to a trie and one that can do error correction using that trie, and those have some quirks. I also tried porting an Earley parser from C, and that took a nasty turn, since it relies a lot on pointer sharing.

I have more experience writing in Go, and I think the code for all would look similar-ish. Go is a bit of a grey language, but it gets the job done without a lot of refactoring and restructuring.


Rust was a surprise to me. I didn't expect to like it (I'm a Kotlin person) but I ended up pretty happy with it overall.

I'm happy to pay the complexity tax associated with the borrow checker. You get outstanding performance and you usually don't need to dive deep into lifetimes to get some basic code running.

The language itself is reasonably pleasant but is missing some key features that really annoy me:

- No overloading

- No default parameters

- No named parameters

- Awkward constructor syntax, which causes a lot of copy/pasted boilerplate

But overall, I'd say Rust is safely in my #2 spot, with Kotlin a bit ahead at #1.


I think about Scala a bit because the second-biggest chess website (Lichess) is open source and written in Scala with the Play framework. I was interested in learning some Scala to help contribute, since I play on the site so often.

I wonder if they've run into these issues with the language, though.


They almost certainly run into these issues, but like most of the community, they probably think the cost is well worth bearing, given the advantages the language gives you.

Once you're used to the way it works, and if you're using well-maintained libraries, I find that it's really not such a big deal. Though it's always awkward to have transitive dependency conflicts, and those are best avoided if possible.


I have only marginal experience with both, but... isn't this the case with Java nowadays as well?


Not quite: Java binaries are backward compatible (but they are not forward compatible). This means that as long as your runtime and build binaries stay relatively recent (by that I mean Java 8 and up at the present date), you won't get any compatibility problems.


I don't have a lot of Java experience, but I think it's backwards compatible even across major versions, whereas Scala isn't backwards compatible even for minor versions.


Minor versions are not binary compatible, but in general it is trivial to cross-compile for different Scala versions if you are writing a library, and SBT makes it pretty simple to use the right binary for your project's Scala version. As the author mentions, this sort of breaks down when library authors drop support for older Scala versions. In my experience, though, that is primarily an issue with Spark being way behind in Scala version support.


"Open source libraries are often abandoned, especially in Scala."

Is this claim backed by anything?


I use Scala for exactly 1 thing: Spark. No intention to use it for anything else.


> Publishing open source project to Maven is way more difficult than most language ecosystems.

Maybe this is a feature, not a bug. I won't call out the language/repo, but I was able to gain admin access for a library my company published with just an email from my company's domain. There are major security issues with "easy." There are also quality issues. You're more likely to publish something if you think it has value; I've published work to Maven Central, but only the high-quality, reusable work.


Would have to disagree here. Just raising the bar for publishing doesn't increase the quality of the libraries available. Consider how open source in general really became much more the norm after GitHub made it trivially easy to publish and collaborate on code.


Also Scala is not (yet) bootstrappable and the first binaries depended on proprietary software.

https://bootstrappable.org/projects/jvm-languages.html https://bootstrapping.miraheze.org/wiki/Bootstrapping_Specif...


Now that's an interesting thought. Since someone was already working on a bootstrapping compiler, I suppose that's the best way, but since the JVM is so high level in general, and doesn't normally have many optimizations at the bytecode level, I almost wonder if it wouldn't be easier to start from one of the early compilers de-compiled into Java instead.


A de-compiled compiler is not something software freedom purists like the Bootstrappable Builds folks would touch with a 6-foot pole.

Their approach is to write secondary, smaller implementations of the same thing in other languages and use those to build the larger thing. The seed of their bootstrap is less than 1000 bytes of hand-written machine code (not assembly), and they are working on eventually reaching a full Linux distro.


I use Scala for e.g. running Hadoop map-reduce jobs or building REST endpoints. I'd never write a reusable library in Scala, though, for the reasons pointed out in the article: binary compatibility is not something Scala cares about.

I’m never going to use a library written in Scala for the same reason, plus chances are I’d have to spend hours fiddling with SBT to rebuild it at some point.

Scala (with Maven) at the top please, then Java all the way down thanks.


Well, in my humble opinion, most people who choose Scala thinking of it as "the better Java" end up frustrated, especially when they start off using the "cool" stuff people talk about on Twitter.

Tbh, I've been programming in Scala for 5 or 6 years now and have never worked with "proper" FP libraries like Cats or ZIO. All the companies I've worked with have used the Lightbend stack: basically the Play framework, which is easy to learn for someone coming from Java/Spring, and, as the most different piece, Akka Streams.

One day one of my Java colleagues started an argument saying Scala was too complex, etc. I asked: based on which code? He mentioned Scalaz. So tell me, what was someone who has never programmed in Scala before doing looking at Scalaz code? No wonder a novice would think that.

The bottom line is that if you are new to Scala and/or have no experienced developer nearby but still want to try it out, just stick with the standard library and use Play for your REST services. When you are confident enough with the language, if you're interested, have a look at the FP libraries.


I've been looking for a video/presentation from one of the main Scala contributors before he left the team. I think I saw it about 8 years ago. It was highly critical.

One of the main slides that stood out discussed something like 20 or 30 different kinds of "types" in Scala. I cannot find it anymore; it may have disappeared with the political "drama" I've heard happened within Scala, which included attempts to blackball people.

I've never personally coded in it. I use Groovy on the JVM and it satisfies my needs, but it has nothing to brag about in terms of avoiding spaghetti code.

It has all the same problems of bootstrapping any language that isn't an "X but distinctly better/easier/more featureful/fixes major problems in X". A big, big lift.

I think only Rust is sufficiently different to have any real long-term chance.



After being a big fan of Scala about 10 years ago, I think that Kotlin is better suited in almost all cases. Kotlin is much easier to learn (especially if you know a similar language like Java, C# or TypeScript), much easier to read and understand and has much better long term stability. The support in IntelliJ is also better. Scala causes performance issues from time to time in IntelliJ and there are some language constructs that the IDE doesn't really understand in all cases.

JetBrains built Kotlin as a programming language for industrial projects - and it shows.


But Java has adopted about 75% of Scala's features from 5 years ago, mostly with some quirky limitations and bloated syntax. What exactly makes it easier for people to understand is beyond me.


Because less is more.

What makes Java and Kotlin more popular languages than Scala is that both Java and Kotlin are being extremely deliberate in choosing what features they include and more importantly, which not to include.

Scala includes everything by default because it allows the EPFL to submit a lot of papers to conferences.


Why do you think Scala has more features than Java/Kotlin?

I think it's the opposite. For example null: where Kotlin has a special operator and semantics, Scala simply has the "Option" type in the std lib.
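
A tiny REPL-style sketch of what I mean (the function names are made up):

    // Optionality as an ordinary library type rather than dedicated language syntax
    def lookupUser(id: Int): Option[String] =
      if (id == 42) Some("Alice") else None

    // The chaining Kotlin does with ?. and ?: falls out of plain methods on Option
    val greeting: String =
      lookupUser(42).map(name => s"Hello, $name").getOrElse("Hello, stranger")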


Do you seriously think Scala has fewer features than Java and Kotlin?


But there is no such thing as Java. It is Java plus Spring plus Guava plus something else for immutables.


I think the point is that comparing Scala the language to Java the language is apples-to-oranges. In general you are using Java along with a bunch of libraries (or Scala plus a bunch of libraries) so the question is whether Java plus all the libraries you are using is less complex than Scala plus all the libraries you are using. And the answer is not nearly as obvious as many people tend to assume.

For example, the most common thing people point to when talking about Scala's complexity is implicits. And that's fair because implicits can be hard to understand and can be used in wildly inappropriate ways. But I have found that what implicits are mostly doing (at least when used correctly) is taking stuff that would be implemented in Java with some crazy runtime reflection scheme and making it a compile-time construct. So then the question is not "are implicits complex?" but "are they MORE complex than the equivalent implementation using reflection?" And on that question I would say absolutely not! Runtime reflection is (to me at least) much more opaque, error prone and difficult to debug.


Implicits are just typed globals that you have to explicitly ask for. Scala's flexibility is the problem: we make Scala code look like Haskell, or strange Java, or Idris. Very few people actually write Scala as Scala; most stick with their own PL linguistic baggage instead.


All languages have their libraries and design patterns, not sure how your comment answers my question.


I would say so. The language specification of Scala is much shorter than Java's.

Having fewer features does not mean the language is easier to learn, though. Scala has fewer features, but the ones it has work well in combination with each other and are very general.


The specification of Brainfuck fits in two paragraphs; by that definition, it should be a very easy language to program in.

The fact that Scala's features work well together is a myth that's been debunked over and over. Just look at the number of semantic meanings for "implicit" or the underscore character.

Scala is just what happens when a language just adds every single feature under the sun with very little care for user productivity.
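
To make the underscore point concrete, here's a quick, non-exhaustive sample of the different things `_` can mean, sketched from memory:

    object UnderscoreDemo {
      import scala.concurrent.duration._         // wildcard import
      val inc: Int => Int = _ + 1                // anonymous-function parameter placeholder
      val sqrt = math.sqrt _                     // eta-expansion: method to function value
      val (first, _) = (1, 2)                    // ignored value in a pattern
      def size(x: Any): Int = x match {
        case s: Seq[_] => s.length               // wildcard (existential) type argument
        case _         => 1                      // catch-all pattern
      }
    }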


Scala is one of the more misunderstood languages, partly due to its higher learning curve compared to most languages.

The implicits you mentioned are a good example. It is one feature and consists of two parts:

1. Implicits at use-site: they define that a not explicitly provided value can be filled in implicitly by the compiler

2. Implicits at definition-site: they mark those values that the compiler is allowed to fill in at use-site when a value is not explicitly provided

That's all there is to it and one without the other would be utterly useless, so it really is _one_ feature.

But that one feature can be used for multiple things, and I assume that this is what you meant. For example, implicits can be used for method extension syntax (which is a distinct feature in Kotlin), or derivation (e.g. serializing a structure into JSON, which in Java is usually done with reflection or annotations), or ad-hoc polymorphism (emulating typeclasses similar to Haskell's).

Mind that these use-cases were not foreseen when Scala was created! But because implicits are such a powerful language feature, Scala can now have method extension syntax without any changes in the language (well, besides some minor optional syntactic sugar, but that's it).

Java cannot easily have this syntax and Kotlin only has it because it was added from the beginning.

Implicits are a beautiful example of how one well-thought-out and flexible language feature can cover multiple features of other programming languages at once.

The underscore character is a different thing, because that isn't really a language feature, it is syntax. But I agree that it is overused in Scala and is quite confusing in the beginning.
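
Here's a compact sketch of both use-cases (all the names are made up for illustration):

    object ImplicitsDemo {
      // 1. Method extension: adds a `words` method to every String
      implicit class StringOps(private val s: String) {
        def words: List[String] = s.split("\\s+").toList
      }

      // 2. Ad-hoc polymorphism: a tiny typeclass whose instances the compiler supplies
      trait Show[A] { def show(a: A): String }
      object Show {
        // definition-site: values the compiler is allowed to fill in
        implicit val intShow: Show[Int]      = (a: Int) => a.toString
        implicit val boolShow: Show[Boolean] = (a: Boolean) => if (a) "yes" else "no"
      }

      // use-site: the implicit parameter is filled in by the compiler
      def describe[A](a: A)(implicit sh: Show[A]): String = sh.show(a)

      // "hello world".words  // List(hello, world)
      // describe(42)         // "42"
      // describe(true)       // "yes"
    }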


Wait, so is it that Scala has too many features or too few?


Yeah, you would think that obviously useful features that were added in a principled way up front would in fact be better than features bolted on after the fact.


I am really a beginner in both Scala and functional programming. I took the first Coursera course on Scala, and I got really frightened when I noticed that you can express many things in either the functional or the object-oriented paradigm. One example is pattern matching vs. polymorphism. When you mix those ideas up throughout your code base, it might become really hard to understand.

This is just a thought that I had, and I wonder if that's actually a relevant issue in practice.
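
For anyone wondering what I mean, here's a minimal sketch of the two styles (the shape types are made up):

    sealed trait Shape
    final case class Circle(r: Double)          extends Shape
    final case class Rect(w: Double, h: Double) extends Shape

    object Areas {
      // Functional style: the operation lives outside the types, via pattern matching
      def area(s: Shape): Double = s match {
        case Circle(r)  => math.Pi * r * r
        case Rect(w, h) => w * h
      }
    }

    // OO style: the operation lives inside each type, via polymorphism
    trait Shape2 { def area: Double }
    final class Circle2(r: Double)         extends Shape2 { def area: Double = math.Pi * r * r }
    final class Rect2(w: Double, h: Double) extends Shape2 { def area: Double = w * h }

    // The trade-off: pattern matching makes it easy to add new operations but hard to
    // add new cases; polymorphism is the reverse.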


FYI this is known as the "expression problem" https://wiki.c2.com/?ExpressionProblem


Wow, thanks! It's always good to see if one's intuitive concerns are actually well-known computer science problems.


I wouldn't trust my colleagues with it. I'd dread any tool that allows them to achieve even greater feats of over-engineering.


It is ironic that a language that named itself Scala doesn't really scale with team size.


It was originally called Scala because the language itself could "scale": the grammar was so flexible that you could use it to seemingly write new language constructs. For example, `while` or `repeat` loops are expressible as functions, and it looks natural because dots can be replaced with spaces in many cases. Functions can have many special characters in their names, functions can be called infix, and so on.

People took these possibilities and ran with them, creating all sorts of wonderfully illegible DSLs. It was widely recognised as a bad idea.
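
A rough sketch of the kind of thing people did (this `repeatUntil` is made up, not a standard construct):

    object RepeatDemo extends App {
      // By-name parameters (`=> Boolean`, `=> Unit`) are re-evaluated on every pass,
      // which is what lets a plain method read like a built-in loop
      def repeatUntil(cond: => Boolean)(body: => Unit): Unit = {
        body
        if (!cond) repeatUntil(cond)(body)
      }

      var i = 0
      repeatUntil(i >= 3) {
        println(s"iteration $i")   // prints iterations 0, 1 and 2
        i += 1
      }
    }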


Why isn't the takeaway that Spark is a maintenance nightmare?


Scala's complexity problem is exactly where I see Rust heading.


Can you elaborate? Is Rust gradually becoming more complex?



