Fjord – F# programming language for the JVM (github.com)



To save anyone the trouble: this project is essentially empty so far.

It doesn't say anything about the owner's ability to port F# to the JVM, but just know that right now it is only a readme, three almost empty Java classes, and the beginning of an ANTLR parser.

So to answer other questions here, you can't even compare it to F# on Mono. F# on Mono works perfectly. The F# compiler and runtime are huge, and getting to parity will probably take at least a year even for a very dedicated team.


Perhaps of more interest is Frege, which is basically Haskell for the JVM. Seems to be coming along nicely: https://github.com/Frege/frege


Surely you could just run LLJVM on the LLVM IR that GHC emits and have Haskell code on your JVM?


Could you get the kind of ad hoc library interop you get with other on-the-JVM languages this way, with LLVM?


Probably not if you did it this week :)

That said, there is some very interesting work going on towards a sane way to call out to JVM / .NET code from Haskell, though nothing will realistically be in truly usable form in the next 12-18 months.


I am not so sure Frege would be of interest to people looking for an impure, strict language. A better fit for that would be Yeti.

And given that porting a language as deeply rooted in the .NET world as F# is very difficult, my guess is that going with Yeti and thus fully embracing the JVM would be the better choice, all the more so as one could do it right now.


Very pleased to see this. F# is a really exciting language, hamstrung by being tied to the MS platform. Hoping we'll see more open-source F# projects as a result.


F# has an open-source compiler and runs well on non-microsoft platforms using Mono. How is it tied to the MS platform?


What's the quality of code produced by Mono? How does it compare with Microsoft's compiler? I'd like to see Mono C# and F# numbers compared against Microsoft's compilers. Will I get 90% of the performance?


There are games running on Mono (e.g. Unity C# scripts, Bastion was ported to Mono to run in Chrome, etc.). The performance of Mono may depend on your specific application, but many people have been using it without any issues.


Mono has been out for years, and I'm sure there have been great improvements. Does anyone have recent comparisons of it vs C# on Windows? I'm sure people must be curious.


The C# performance on Mono is quite different from the F# performance. While C# performs decently on Mono, F# doesn't perform as well.


You can run F# on the old "stable" branch of Mono (2.10.x), but I'd highly recommend running Mono 3.0.x instead. The older version uses an older, simpler GC which works OK for C#, but which seems to have trouble with F# (which generates more G0 objects than C#). The new 'sgen' GC in Mono 3.0 is a huge improvement though, so you shouldn't see much (if any) performance difference between F# and C# apps.


Do you have any links with benchmarks? Very interesting.


Depends on what you're doing. Mono 3.0 has a stable LLVM backend. For low-level code it should run about as fast as C, modulo array bounds checks etc.


Some examples of why .NET is a Windows technology:

http://www.mono-project.com/Guidelines:Application_Portabili...


No. Those are examples of portability issues when your application depends on platform specific functionality:

Most code containing P/Invoke calls into native Windows Libraries (as opposed to P/Invokes done to your own C libraries) will need to be adapted to the equivalent call in Linux, or the code will have to be refactored to use a different call. <-- That is, only when you choose to link directly against Windows technology.

Registry <-- Only if you choose to use it. I haven't written any registry-dependent code in the last ten years and don't miss anything about it.

you should not assume the order of bytes <-- Not .NET's problem.

... the list goes on.


It's tied in the sense that the maturity of Mono is a really long way behind that of .NET. I tried to use Mono and F# on Windows XP, and the answer was "you can't, install Windows 7 and VS2012", which is annoying. The Mono community is only a few people compared to the .NET support, so yes, it really is tied to .NET.


Since F# was derived from OCaml, I think readers may also be interested in taking a look at the OCaml-Java project: http://ocamljava.x9c.fr/

(It's essentially what it says on the tin...)


Or Scala, which is an acceptable ML if you ignore the OO parts :-).


It's really not. Scala is a very interesting language, and it incorporates many functional features, but it's not at all ML-like. In particular, type annotations are needed in many, many places where they would be superfluous in an ML-derived language.


As a recent migrant from OCaml to Scala (due to the stagnation of OCaml), I have to say that you are quite right about type annotations, but the style of programming that an OCaml/F# person would be accustomed to is quite easily replicated in Scala.


Yep, I switched from F# to Scala and actually know many developers making the same change. Scala at first glance looks more verbose, but it's actually much more powerful (higher-kinded types, scalaz, macros, etc.). Once you're accustomed to the syntax you find a clear and concise language (with ugly type annotations).


What do you mean by "stagnation of OCaml"? There was a period some 4-5 years ago where it did seem like there was little development on the language, but things have changed quite a lot since then. Lots of developments in the core language (including first-class modules and GADTs!), and a blossoming of the ecosystem around it. To me, it seems OCaml has never been livelier than now...


Let me start by saying that I <3 OCaml. I've used it intensively for over 10 years.

By stagnation I mostly meant the comparatively tiny ecosystem of libraries vis-à-vis, e.g., JVM-based languages. In particular, with Akka, Scala has much nicer support for concurrency. The ability to call, and be called by, Java code in the smoothest possible way is also beneficial for my use cases.

I'm surprised that GADTs were added, because a few years ago I visited the Gallium team at INRIA, who develop OCaml, and asked about GADTs. I was told by one of the senior OCaml developers that there were no plans for the inclusion of this feature.


Has anyone tried running F# through IKVM [1] (.NET <-> Java)? That wouldn't solve this, but it should make it possible to run F# on a Java VM.

[1] http://weblog.ikvm.net/


Not quite the same, but you may be interested in the IKVM type provider prototype[1], which allows you to write F# scripts directly against JAR files (using IKVM in the implementation).

[1] http://colinbul.wordpress.com/2013/02/28/f-ikvm-type-provide...


IKVM runs JVM code on .NET, not the other way around.


That may not be a practically relevant difference. Why would you want to run F# on the JVM? To interface with JVM code. IKVM lets you do that already.

I've rolled out multiple .NET programs that contain Java open source libraries through IKVM. It works just fine, and the amount of extra work you need to do to make the JVM->.NET mapping work is remarkably little.


Or perhaps you'd wish to run F# on the JVM because you like the language, but don't wish to depend on .NET (or Mono).


How is this supposed to work given the JVM does not support tail calls?


In general speech, I can get behind the idea that you shouldn't correct someone if you understand what they mean. In programming, however, I think it's important to be pedantic.

You must mean tail call optimization/elimination here. Clearly, the JVM supports tail calls.


Plus, since in many (most?) cases it is relatively simple to perform tail call elimination with iteration rather than by reusing the frame, static analysis can classify those cases and hygienically rewrite those functions (see the sketch below).

Most attempts I've seen to exploit TCO either aren't optimizations or can be dealt with via another control mechanism. My suspicion is that it is largely a solution in search of a problem for the programmer attempting to employ the technique, much like using AOP to add logging information for every "enter" and "return" from a method/function.

Mind you, I am in no way dismissing tail-call elimination as an important tool. It's just that some of its practitioners (such as whichever idiot wrote some of the barely readable code in old programs of mine!) are a bit zealous in its use.
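
To make the rewrite-as-iteration point above concrete, here is a minimal Scala sketch (function names are made up for illustration):

    // Tail-recursive sum: the recursive call is the last thing evaluated.
    def sumRec(xs: List[Int], acc: Int = 0): Int =
      if (xs.isEmpty) acc else sumRec(xs.tail, acc + xs.head)

    // The same computation rewritten with iteration, which is roughly what
    // a tail-call-to-loop rewrite produces: same result, no stack growth.
    def sumLoop(xs: List[Int]): Int = {
      var rest = xs
      var acc  = 0
      while (rest.nonEmpty) { acc += rest.head; rest = rest.tail }
      acc
    }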


If you want to be pedantic, the JVM does not support tail (function) _calls_, but jumps, which may be the result of tail call optimization or elimination.

As far as I know, we still cannot efficiently turn mutually recursive calls into a chain of jumps, but I'll be glad to be corrected here.


I think there may be a misconception here: A tail call is NOT a call that doesn't allocate a stack frame. A tail call is a call whose result is the return value of a function. This is why we refer to the stack-frame-eliminating optimization as "tail call elimination". We are taking a jump-to-subroutine (call) instruction and replacing it with a normal jump instruction, thus eliminating a "call" from our program.

The JVM most certainly does support general tail calls through its invoke instruction, which supports calls to arbitrary methods of arbitrary objects (which, of course, includes tail calls).

Some compilers also support the optimization of _some_ recursive tail calls (usually recursive tail calls to final or local methods) into loops using the goto instruction, which supports jumps within the current method (essentially optimizing the tail-recursive method into a non-recursive method containing a loop).
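
For illustration, a small Scala sketch of the distinction (hypothetical names): in the first function the recursive call's result is the return value, so the compiler can turn it into a loop; in the second, an addition still happens after the call returns, so it is not a tail call.

    // Tail call: the call's result IS the return value; scalac compiles this
    // self-recursion (on a non-overridable method) into a loop via goto,
    // so the stack stays flat.
    def length(xs: List[Int], acc: Int = 0): Int =
      if (xs.isEmpty) acc else length(xs.tail, acc + 1)

    // Not a tail call: "1 + ..." runs after the call returns, so every
    // element costs a stack frame.
    def lengthNaive(xs: List[Int]): Int =
      if (xs.isEmpty) 0 else 1 + lengthNaive(xs.tail)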


Thanks for the clarification.

If I read you correctly, you say the JVM 'supports tail calls' because it can call other methods. In that case, the phrase 'supports tail calls' is about as meaningful as 'supports function calls'.

In any case, I don't think this is a question of optimization. I don't see how, for example, an F# program written in CPS can ever run on the JVM.


Scheme implementations will compile any tail calls into jumps, even if they're mutually recursive. This is easy to understand by doing CPS transformation by hand on such calls.
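
A rough Scala illustration of why this matters on the JVM (hypothetical names): both calls below are in tail position, and a Scheme compiler would turn them into jumps, but scalac only rewrites self-recursion, so each mutual call still pushes a JVM frame and a large input overflows the stack.

    def isEven(n: Long): Boolean = if (n == 0) true  else isOdd(n - 1)
    def isOdd(n: Long): Boolean  = if (n == 0) false else isEven(n - 1)

    // isEven(10L)       => true
    // isEven(10000000L) => StackOverflowError, even though every call is a tail call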


I think everyone knows what he meant. In programming, everything supports everything anyhow, as long as it's Turing complete.


This isn't really fair: some implementations really do rule out certain optimizations. Turing-completeness only refers to what is computable, not what actual algorithms and optimization techniques are possible.


And to be more specific, Turing completeness refers to what is computable with an unbounded amount of memory and time. This is why we have complexity theory layered on top of computability theory. Some things are theoretically computable but practically infeasible. Compiler optimizations can help greatly in some of those cases.


Maybe someone who knows Scala or Clojure internals can answer this question..


Scala doesn't optimize general tail calls either (only direct tail recursion of final methods). If you want general tail call elimination on the JVM, you must use trampolining, which adds some overhead to every call. So most languages choose fast calls without tail call elimination over slower calls with it (i.e., speed over correctness).
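
As a sketch of what the trampolining looks like, here is the classic mutually recursive pair written with the standard library's scala.util.control.TailCalls: each step returns a small TailRec object instead of making a nested call, and .result drives the loop in constant stack space.

    import scala.util.control.TailCalls._

    def isEven(n: Long): TailRec[Boolean] =
      if (n == 0) done(true) else tailcall(isOdd(n - 1))

    def isOdd(n: Long): TailRec[Boolean] =
      if (n == 0) done(false) else tailcall(isEven(n - 1))

    // isEven(10000001L).result => false, in constant stack space, at the cost
    // of allocating one TailRec object per step (the overhead mentioned above).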


Support for tail calls pretty much depends on the actual runtime implementation.

If it is important to you (it certainly is to me), use an implementation which supports proper tail calls.

I'm doing it and I have never looked back.


Out of curiosity, which implementation are you using?



This is very cool. Thank you!


Someone has made TCO possible in Clojure already: https://github.com/cjfrisz/clojure-tco


That's very cool. But note that as far as I can tell it only handles mutual recursion, not arbitrary tail calls (e.g. calls to a function argument).


Clojure has a loop/recur construct which makes the tail recursion explicit, so it doesn't need to be done by the JVM. In other words, regular recursive calls should not be used for loops if you want performance.


Scala has a similar explicit @tailrec annotation for its compiler, too (see the sketch after the quote):

"A method annotation which verifies that the method will be compiled with tail call optimization. If it is present, the compiler will issue an error if the method cannot be optimized into a loop."

http://www.scala-lang.org/api/current/index.html#scala.annot...
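
A minimal sketch of how it's used: the annotation adds no new optimization, it just turns "the compiler silently fell back to real calls" into a compile-time error.

    import scala.annotation.tailrec

    // Compiles: the recursive call is in tail position, so it becomes a loop.
    @tailrec
    def gcd(a: Long, b: Long): Long =
      if (b == 0) a else gcd(b, a % b)

    // Would NOT compile with @tailrec: "n * ..." happens after the call
    // returns, so the method cannot be optimized into a loop.
    // @tailrec
    // def factorial(n: Long): Long =
    //   if (n <= 1) 1 else n * factorial(n - 1)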


I think tail recursion can be handled automatically by the compiler without any support from the JVM (Scala does it).

The issue is different for general tail call elimination.


Can't you lambda-lift & fully CPS-convert the code somewhere during compilation, then use a trampoline at runtime?


I'd say that's a huge performance hit (return, check, call instead of jump). Plus, your JIT might not like it.


How does mono's F# coverage compare to this?


F# works very well on Mono. You can compile and use the open-source edition of the F# compiler [1] on both Linux and on OS X (though I haven't tried OS X).

This port is in very early stages, and only has the start of a lexer as far as I can see.

[1] https://github.com/fsharp/fsharp


F# works very well on OS X, and with Xamarin Studio you can target a lot of different platforms: Windows, OS X, Android, and iOS.


FYI, F# 3.0 is shipped as part of the Mono package for OS X.

It's also available through the FreeBSD ports system: http://www.freshports.org/lang/fsharp/


(Replying to myself, since I can't edit the post now...)

More info on F# 3.0 + Mono 3.0 + FreeBSD can be found here: https://groups.google.com/forum/?fromgroups=#!topic/fsharp-o...

Arch Linux users: the thread I linked also has instructions on how to build/install F# 3.0 and Mono 3.0, if you're interested.


There is a good technical thread started on Stack Overflow: http://stackoverflow.com/questions/15731724/whats-the-easies...


F# <- OCaml <- the ML language family.

Have a look at Yeti: http://mth.github.com/yeti/



