But I do in general agree with your take on it :)
Every time I have heard this kind of claim (with modern languages), it turned out not to be true except for trivial code or straw-man bad code in the 'bigger' language. So if you have real-world examples that have real-world effort put in, I'd like to see them! (I would be happy to be wrong.)
5x-10x productivity increase would be huge if it actually existed; it would be so unstoppable that everyone would switch to the new really-great language immediately. That hasn't happened, which should be a clue that maybe the increase is not there.
Even a 20% decrease in cost of engineering would be so large as to be unignorable.
Obviously we're throwing around random values like 5x and 10x "productivity", but it's more nuanced than that. There's more than LoC that can be measured as "productivity": bug count and severity per LoC written, refactoring cost, performance, robustness, library support, setup time, and many more metrics.
Metrics are valued differently depending on the programmer and the problem. E.g., who cares if my CRUD web app has memory leaks and crashes randomly? It's stateless! There's no one really-great language because every programmer has their own productivity priorities.
Often, though, LoC is used as a poor proxy for this multi-dimensional "productivity" value.
For your 5x-10x case, what is true is that the gain is genuinely possible, but it is just as likely to come from libraries as from language constructs. Because of that, it's often 5x-10x in a limited area.
When writing software, how fast you can type the code is rarely the limiting factor for speed of development – the architecting and consideration of interplay between components takes the bulk of the time. The grandparent claimed code reduction (which has intrinsic maintainability benefits) but made no statements about general cost of engineering.
Which is exactly what's supposed to get simpler as LoC goes down.
So if a supposed 5x-10x code reduction (which I've never seen real evidence of) doesn't lead to 5x-10x productivity increase, how much increase is there supposed to be? Surely more than zero?
I don't think so. If you can express the same concepts with the same interfaces and functionality in 1kloc vs 10kloc, most of your time has probably still gone into figuring out the interfaces and connections.
> So if a supposed 5x-10x code reduction (which I've never seen real evidence of) doesn't lead to 5x-10x productivity increase, how much increase is there supposed to be? Surely more than zero?
Oh, certainly more than zero! Sometimes much more. But there's simply not a one-size-fits-all formula for the relationship between lines of code written and productivity.
Anyways, not really sure what you're getting at. Your original comment was that 10kloc isn't "big"; the rebuttal is that lines of code is a naive way of looking at system complexity, which is presumably what you mean by "big".
Pattern matching makes it easy to bind variables and validate their values in one line (so no need for an if statement).
In Elixir, you usually don't catch exceptions; you let it crash. So much of the code to handle failures/errors simply doesn't need to be written: it's handled by OTP supervisors.
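A toy sketch of that "let it crash" shape, again in Python rather than Elixir (all names here are invented for illustration): the worker makes no attempt at error handling and just raises; a tiny supervisor restarts it up to a limit, which is roughly what an OTP supervisor does for a process.

```python
def flaky_worker(attempt, fail_until):
    # Deliberately no try/except in the worker: it crashes on early
    # attempts instead of defensively handling its own errors.
    if attempt < fail_until:
        raise RuntimeError(f"crash on attempt {attempt}")
    return "done"

def supervise(worker, max_restarts=3, fail_until=2):
    # Restart the "process" with fresh state after each crash,
    # giving up once the restart budget is exhausted.
    attempt = 0
    while attempt <= max_restarts:
        try:
            return worker(attempt, fail_until)
        except RuntimeError:
            attempt += 1
    raise SystemExit("restart limit exceeded")

print(supervise(flaky_worker))  # done (after two restarts)
```

The real thing is far richer (restart strategies, supervision trees, process isolation), but the division of labor is the point: worker code stays free of error handling.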
No need to write any communication layer, since it's built into OTP, which implements location transparency.
These are the low-hanging fruit off the top of my head. I'll point out that fewer lines of code doesn't automatically translate to increased productivity. Also, transitioning to a brand-new stack is not always justifiable/possible, even with the promise of a significant increase in productivity; it's really not that simple.
Lack of functional composition
Mandatory types (for every trivial parameter object or "lambda")
Coping with mutable state in large amounts of effectively global data.
Excessive partitioning due to the size impacts of above hardships
Then we can start to talk about the lengths of the lines...
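A minimal Python sketch of the composition point above (the helper and names are made up for illustration): with function composition, a pipeline is one expression; without it, each step needs its own named intermediate.

```python
from functools import reduce

def compose(*fns):
    # Right-to-left composition: compose(f, g)(x) == f(g(x)).
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

def exclaim(s):
    return s + "!"

# With composition: the pipeline is a single expression.
normalize = compose(exclaim, str.lower, str.strip)
print(normalize("  HELLO  "))  # hello!

# Without composition: the same pipeline, step by named step.
def normalize_verbose(s):
    stripped = s.strip()
    lowered = stripped.lower()
    return lowered + "!"
```

Three steps is a wash either way; the size gap compounds when a codebase has hundreds of such pipelines.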
1. Thinking about the problem and modeling it properly is still hard, and language independent
2. The language still has its own quirks/failings that you have to work around
PS. My comment has nothing to do with Scala per se - I'm using it as an example.