One interesting thing I've found in my limited experience with Elixir is that the code is very compact. What would often take 10+ LoC in other languages can be succinctly expressed as 1-2 lines.
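
A toy illustration of the kind of thing I mean (my own sketch, nothing rigorous): counting word frequencies is a one-liner,

    # one pipeline: no loop, no counter, no mutable map
    "the quick fox the" |> String.split() |> Enum.frequencies()
    #=> %{"fox" => 1, "quick" => 1, "the" => 2}

where the typical imperative version needs a loop plus per-key update logic on a mutable map.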

But I do in general agree with your take on it :)




> would often take 10+ LoC in other languages can be succinctly expressed as 1-2 lines.

Every time I have heard this kind of claim (with modern languages), it turned out not to be true except for trivial code or straw-man bad code in the 'bigger' language. So if you have real-world examples that have real-world effort put in, I'd like to see them! (I would be happy to be wrong.)

5x-10x productivity increase would be huge if it actually existed; it would be so unstoppable that everyone would switch to the new really-great language immediately. That hasn't happened, which should be a clue that maybe the increase is not there.

Even a 20% decrease in cost of engineering would be so large as to be unignorable.


Concurrent, preemptive socket servers with robust failure handling are "trivial" in Erlang; I'd say 10x productivity would be an understatement compared to other languages. Writing floating-point-heavy numeric code? 0.1x productivity would be generous. It depends on the problem you're solving.
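
To make the first claim concrete, here's a sketch of a concurrent TCP echo server (Elixir syntax, same BEAM runtime; supervision and real error handling deliberately elided):

    defmodule Echo do
      def listen(port) do
        {:ok, socket} =
          :gen_tcp.listen(port, [:binary, active: false, reuseaddr: true])
        accept(socket)
      end

      defp accept(socket) do
        {:ok, client} = :gen_tcp.accept(socket)
        # every connection gets its own preemptively scheduled process
        pid = spawn(fn -> serve(client) end)
        :gen_tcp.controlling_process(client, pid)
        accept(socket)
      end

      defp serve(client) do
        case :gen_tcp.recv(client, 0) do
          {:ok, data} ->
            :gen_tcp.send(client, data)
            serve(client)

          {:error, :closed} ->
            :ok
        end
      end
    end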

Obviously we're throwing around made-up values like 5x and 10x "productivity", but it's more nuanced than that. There's more to "productivity" than LoC: bug count and severity per line written, refactoring cost, performance, robustness, library support, setup time, and many more metrics besides.

Metrics are valued differently depending on the programmer and the problem. E.g., who cares if my CRUD web app has memory leaks and crashes randomly; it's stateless! There's no one really-great language because every programmer has their own productivity priorities.

Often, though, LoC is used as a poor proxy for this multi-dimensional "productivity" value.


20% is a low enough bar that you can cite a well-known example: Objective-C vs. Swift removes entire categories of code, like header files, and adds sorely needed type checks. Our internal results are less code and fewer defects per line of code. However, a large number of people haven't switched. I don't blame them either; the transition (a split code base) is painful, and Apple makes things worse with language churn.

For your 5x-10x case, the gain is genuinely possible, but it is just as likely to come from libraries as from language constructs. Because of that, it's often 5x-10x only in a limited area.


> 5x-10x productivity increase would be huge if it actually existed

When writing software, how fast you can type the code is rarely the limiting factor for speed of development – the architecting and consideration of interplay between components takes the bulk of the time. The grandparent claimed code reduction (which has intrinsic maintainability benefits) but made no statements about general cost of engineering.


> the architecting and consideration of interplay between components takes the bulk of the time.

Which is supposed to be what is simplified as LOC goes down.

So if a supposed 5x-10x code reduction (which I've never seen real evidence of) doesn't lead to 5x-10x productivity increase, how much increase is there supposed to be? Surely more than zero?


> Which is supposed to be what is simplified as LOC goes down.

I don't think so. If you can express the same concepts with the same interfaces and functionality in 1kloc vs 10kloc, most of your time has probably still gone into figuring out the interfaces and connections.

> So if a supposed 5x-10x code reduction (which I've never seen real evidence of) doesn't lead to 5x-10x productivity increase, how much increase is there supposed to be? Surely more than zero?

Oh, certainly more than zero! Sometimes much more. But there's simply not a one-size-fits-all formula for the relationship between lines of code written and productivity.

Anyways, not really sure what you're getting at. Your original comment was that 10kloc isn't "big"; the rebuttal is that lines of code is a naive way of looking at system complexity, which is presumably what you mean by "big".


You find having to read 1kloc vs 10kloc the same?


I don't believe that's what was said. But in any case, the answer is generally no, depending on your definition of "read".


In Elixir you never have to write manual iteration; there's no "for-loop-with-counter" dance.
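
For instance (trivial sketch):

    # transform a list with no index variable and no mutation
    Enum.map([1, 2, 3], fn x -> x * 2 end)
    #=> [2, 4, 6]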

Pattern matching makes it easy to bind variables and validate their values in one line (so no need for an if statement).
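
A one-line sketch (fetch_user/1 is a hypothetical function returning {:ok, user} or {:error, reason}):

    # binds user and asserts the :ok tag at once;
    # anything else raises MatchError, no if needed
    {:ok, user} = fetch_user(id)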

In Elixir, you usually don't catch exceptions; you let the process crash. So most of the code to handle failures/errors simply doesn't exist: recovery is handled by OTP.
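
The supervision side looks roughly like this (MyWorker is a hypothetical GenServer):

    # if MyWorker crashes, the supervisor restarts it fresh;
    # the worker itself carries no defensive try/rescue code
    children = [{MyWorker, []}]
    Supervisor.start_link(children, strategy: :one_for_one)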

No need to write a communication layer, since OTP has one built in that implements location transparency.
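
For example (server and node names are hypothetical; assumes the nodes are already connected):

    # calling a named GenServer on another node looks exactly
    # like calling a local one
    GenServer.call({:my_server, :"app@otherhost"}, :get_state)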

These are the low-hanging fruit off the top of my head. I'll point out that fewer lines of code doesn't automatically translate to increased productivity. Also, transitioning to a brand-new stack is not always justifiable or possible even with the promise of a significant increase in productivity; it's really not that simple.


It's more that there are a lot of things you get for free with OTP/BEAM that you either can't get at all in other environments or that would take a lot of effort there. In those instances the ratio can easily be 20 to 1 or more, and those cases are far from trivial: hot code reload, calling code on any cluster node, process supervision, and on and on.
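
A sketch of the cluster-call point (node name hypothetical; the nodes must be connected and share a cookie):

    # run any function on another node using nothing but the stdlib
    :rpc.call(:"app@otherhost", Enum, :sum, [[1, 2, 3]])
    #=> 6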


Checked exceptions

Lack of functional composition

Mandatory types (for every trivial parameter object or "lambda")

Excessive Object-Whatever-Mapping

Coping with mutable state in large amounts of effectively global data.

Excessive partitioning due to the size impacts of the above hardships

Then we can start to talk about the lengths of the lines...


FYI: criticism aimed at a certain "Enterprise" language / libraries / culture which is pretty much mandatory outside of the Bay Area.


Note that a decrease in LOC does not imply a corresponding increase in productivity. Our Scala code is easily smaller than its equivalent Java representation, but:

1. Thinking about the problem and modeling it properly is still hard, and language independent

2. The language still has its own quirks/failings that you have to work around

PS. My comment has nothing to do with Scala per se - I'm using it as an example.


Even if you multiply it by 3, it's a small codebase. And Phoenix does a bunch of code generation, from what I've seen.



