
It would be interesting to know what they plan to do. It's easy to speculate. It would be much more interesting to see OCaml on the JVM.

The JVM is quite a bad choice for languages like OCaml, as it doesn't support full tail call elimination, which is fundamental to the predominant style of writing OCaml. So you would either have to simulate the OCaml stack on the Java heap (slow, and it would probably make OCaml/Java interop ugly, defeating the whole purpose of porting to the JVM), or rewrite most of the existing OCaml code in an OCaml-for-the-JVM style that is quite different from normal OCaml style, taking into account which tail calls can be optimized and which cannot. And if you decide to go down that road, you won't gain much that you can't already get by using Scala right now; the only missing thing is camlp4.
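To make the point concrete, here is a minimal Java sketch (the method names are mine, purely for illustration) of why the missing tail call elimination hurts: a call in tail position still pushes a new JVM stack frame, so the recursion depths that are routine in OCaml overflow the default Java stack, and the only fix is to hand-rewrite the recursion as a loop.

```java
public class TailCallDemo {
    // Tail-recursive in form, but the JVM still pushes a frame per call:
    // neither javac nor the JIT performs tail call elimination, so deep
    // recursion overflows the (typically well under a megabyte) thread stack.
    static long sumTo(long n, long acc) {
        if (n == 0) return acc;
        return sumTo(n - 1, acc + n); // a tail call, yet a brand-new frame
    }

    // The rewrite that a compiler with tail call elimination would
    // effectively produce for you:
    static long sumToLoop(long n) {
        long acc = 0;
        while (n > 0) { acc += n; n--; }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumToLoop(1_000_000)); // prints 500000500000
        try {
            System.out.println(sumTo(1_000_000, 0));
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError at depth ~1,000,000");
        }
    }
}
```

Simulating the OCaml stack on the heap (the first option above) avoids the overflow, but every "call" then becomes a heap allocation plus a trampoline bounce, which is exactly the interop and performance cost described.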

JVM tail recursion is done. It's available now and officially ships July 28th.

IIRC, the next JVM will indeed offer a new operation that allows tail calls (not exactly tailrec). However, this operation is incompatible with the Java security model (what you're allowed to do depends on what's above you in the stack -- this is used in many places in Java, in particular for anything related to accessing the screen or the file system, or more generally anything involving JNI) and with the Java exception model (exceptions capture the whole stack). Therefore, the operation will not be used by Java itself, and chances are that using it in JVM languages without breaking compatibility with the Java stdlib will take some work, at least for the features related to security.
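The stack-dependence is easy to observe from plain Java (this sketch is mine; the method names are made up for illustration): exceptions capture every frame, and the classic security checks walked those same frames to decide what the caller was permitted to do. Eliding a frame via tail call elimination would visibly change what both mechanisms see.

```java
public class StackInspection {
    // Exceptions capture the full call stack; Java's stack-walking security
    // checks (SecurityManager/AccessController) similarly inspected callers'
    // frames. A tail call that removed its caller's frame would change what
    // both observe -- hence the incompatibility described above.
    static String nameOfCaller() {
        StackTraceElement[] frames = new Exception().getStackTrace();
        return frames[1].getMethodName(); // frames[0] is nameOfCaller itself
    }

    static String viaTailPosition() {
        // This call is in tail position: with tail call elimination, this
        // frame would be gone and nameOfCaller would report our caller
        // ("main") instead of us.
        return nameOfCaller();
    }

    public static void main(String[] args) {
        System.out.println(viaTailPosition()); // prints "viaTailPosition"
    }
}
```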

Can you cite a source for this, please?

OcamlJava is a work-in-progress implementation of OCaml on the JVM, from Xavier Clerc of INRIA. http://ocamljava.x9c.fr/

There's also Yeti ( http://mth.github.com/yeti/ ), an ML for the JVM, though I'm not sure how mature it is yet.

I'm really loving Yeti. It's not very mature but it's a very promising language.

It does need more libraries, but it's also extremely interoperable with Java and the JVM: it's very simple to call out to Java libraries, and it compiles to regular JVM classes that can easily be called from Java.

For anyone interested, I managed to write a Mustache implementation last weekend:


What is wrong with generating fast machine-code?

Nothing wrong with that. I believe that everybody wants fast machine-code compilers to exist.

However, experience shows that, at least in the presence of sufficient RAM and for code that runs long enough, the combination of VM + JIT beats fast machine code. For sources, see the Computer Language Benchmarks Game (formerly the Language Shootout; nowadays, JIT-compiled Java ranks above OCaml and even C in many benchmarks) or the Dynamo papers (JIT-recompiled native C beating statically optimized native C by about 5% on reference benchmarks).

In addition, having a VM + JIT allows for very nice tricks. For instance, recent versions of Mac OS X use LLVM code that is dynamically targeted to either the CPU (generally predictable when you're building the application) or the machine's GPU (much harder to predict) depending on available resources. If one thinks in terms of distributed/cloud resources rather than one computer, it also makes sense to target one specific architecture at the latest possible instant (e.g. during deployment, possibly even after work-stealing), to be able to take advantage of heterogeneous architectures. Also, the VM + JIT combination paves the way for extremely powerful debugging primitives, or for late optimizations that just can't be performed during compilation due to lack of information.
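One standard example of such a late optimization (my sketch, not from the comment above) is devirtualization: an ahead-of-time compiler must emit an indirect call through an interface, because any implementation might turn up at run time, whereas the JIT can observe which classes have actually been loaded and inline the one implementation it has seen.

```java
public class LateOpt {
    interface Op { long apply(long x); }

    // An AOT compiler has to compile op.apply as an indirect call: any Op
    // implementation could exist. A JIT that observes, at run time, that
    // only Inc has ever been loaded and called here can devirtualize and
    // inline it -- information that simply isn't available at compile time.
    static long applyAll(Op op, long n) {
        long acc = 0;
        for (long i = 0; i < n; i++) acc = op.apply(acc);
        return acc;
    }

    static final class Inc implements Op {
        public long apply(long x) { return x + 1; }
    }

    public static void main(String[] args) {
        System.out.println(applyAll(new Inc(), 1000)); // prints 1000
    }
}
```

The JIT can even undo the optimization (deoptimize) if a second implementation is loaded later, which is precisely the kind of bet a static machine-code compiler can never make.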

So, in the long run, it's quite possible that efficient machine-code will be relegated to niches (e.g. system development), while JIT-ed code will [continue to] take over the world.

Hadn't heard about targeting CPU or GPU dynamically. That's very cool. Thanks.

