
> code is for and by humans

I think this is very much domain-dependent. Generally, the more efficiency matters, the less I find this to be true. And really, code is always for the machine at the end of the day.

Additionally, if an added layer of abstraction makes the code more difficult to debug, that also makes the code worse for humans. Note I'm not arguing for or against whether that's happening in the case of OCaml/ReasonML, just making the point.



> code is always for the machine at the end of the day

On the other hand, code is read much more than it is written.

You may also regard efficiency as a necessary evil if the cost is readability.

Ultimately the winning strategy here is achieving "free abstraction": writing code that is both readable and efficient. Different languages aim at this. C++ has arguably been best at it for most of its history; Haskell and Rust are competing now as well.

OCaml isn't efficient because of free abstraction, exactly, but because of its extremely simple translation strategies. OCaml's "flambda" compiler extension wasn't released as stable until 4.03 (2016), which means that before then, very little high-level transformation was happening.

My experience is that OCaml programmers care about low-level optimization, for good and for bad.

For example, this Stack Overflow question has two versions of a function 'partialsums':

https://stackoverflow.com/questions/37694313/make-ocaml-func...

The one that Jeffrey Scofield provides is more efficient essentially because the accumulated value is a list, just like the end result, whereas my solution accumulates a tuple, which means every iteration of the fold allocates, boxes, and unboxes that tuple. An optimizing compiler working at a higher level of abstraction might figure that out and reuse the memory of the tuple, but no sir.
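To make the difference concrete, here is a sketch of the two shapes being discussed (not the exact Stack Overflow code; the function names are mine):

```ocaml
(* Tuple-accumulating version: every step of the fold allocates a fresh
   (running_sum, acc) pair on the heap, which the next step destructures. *)
let partial_sums_tuple xs =
  let _, rev =
    List.fold_left
      (fun (sum, acc) x ->
         let sum' = sum + x in
         (sum', sum' :: acc))
      (0, []) xs
  in
  List.rev rev

(* List-accumulating version: the accumulator is the (reversed) result list
   itself, so the only allocation per step is the cons cell the result
   needs anyway. The running sum is read off the head of the accumulator. *)
let partial_sums_list xs =
  let step acc x =
    match acc with
    | [] -> [x]
    | prev :: _ -> (prev + x) :: acc
  in
  List.rev (List.fold_left step [] xs)
```

Both return the same result, e.g. `[1; 2; 3; 4]` gives `[1; 3; 6; 10]`; the second just avoids the per-iteration tuple that the compiler won't eliminate for you.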



