Why Clojure doesn't need invokedynamic (groups.google.com)
148 points by apgwoz 2007 days ago | 53 comments



I'll do a bulk reply since there's lots of comments here I would reply to.

Clojure doesn't need invokedynamic in general because Clojure doesn't really dispatch dynamically. It's dynamically typed, but calls are made via a known path to a specific function, and only via rebinding can that target change. The general case goes straight through.

In a sense, it's like Clojure is JRuby where all classes only have one method. With only one method, you can always just call straight in, no lookup required, and avoid dynamic lookup.
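That "one method per class" idea can be sketched in Java (hypothetical names; the real interface in Clojure is clojure.lang.IFn, which this only approximates):

```java
// Sketch: every Clojure function compiles to a class with a single
// invoke method, so a call site needs no method lookup by name --
// it is a plain invokeinterface on the function object.
interface Fn {
    Object invoke(Object arg);
}

class Inc implements Fn {
    // Stand-in for the compiled body of (fn [x] (inc x)), boxed for simplicity.
    public Object invoke(Object arg) {
        return ((Long) arg) + 1L;
    }
}

public class SingleMethodDemo {
    public static void main(String[] args) {
        Fn inc = new Inc();
        // The call site already knows the one method to invoke.
        System.out.println(inc.invoke(41L)); // prints 42
    }
}
```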

If Clojure had to deal with dynamically selecting a method based on a target object type, incoming parameter types, and so on, invokedynamic would be more useful. Indeed, Clojure often hacks around such cases by allowing you to specify Java signatures (to avoid reflection) or primitive types (to avoid boxed math). Both cases could be improved without ugly syntax if invokedynamic were used.
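A rough Java analogy for what a type hint buys (illustrative only, not Clojure's actual compiler output): without a known type, the call must go through reflection; with one, it is a direct call.

```java
import java.lang.reflect.Method;

public class HintDemo {
    // Without a hint: the target type is unknown at compile time, so
    // the method is looked up reflectively on every call.
    static Object lengthReflective(Object s) throws Exception {
        Method m = s.getClass().getMethod("length");
        return m.invoke(s);
    }

    // With a hint (think ^String in Clojure): the type is known and a
    // plain invokevirtual is emitted -- no reflection.
    static int lengthDirect(String s) {
        return s.length();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(lengthReflective("hello")); // 5
        System.out.println(lengthDirect("hello"));     // 5
    }
}
```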

So to summarize, Clojure could get benefit out of invokedynamic...if it hadn't already added hacks and syntax to allow opting out of the most-dynamic cases.

In JRuby, we could avoid a lot of dynamic dispatch by allowing type signatures, type declarations, and so on. But that wouldn't be Ruby, and we don't control Ruby. JRuby compares favorably to Clojure without the typing syntax, performance-wise, which is quite acceptable to me. With invokedynamic, JRuby even compares favorably to Java, with the exception of boxed math (which is very hard to optimize in a dynamic language). For equivalent work, JRuby with invokedynamic is within 2x of Java, and that will continue to improve as the Hotspot guys optimize invokedynamic.


Aiyayyay. Where to begin.

    calls are made via a known path to a specific 
    function, and only via rebinding can that target 
    change. The general case goes straight through.
This is half true. The rebinding via def will change the target, but the call always happens via a lookup through a volatile. In fact, it's this very chain that can benefit from invokedynamic.
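The lookup described here can be sketched in Java (hypothetical names; the real class is clojure.lang.Var, which this only approximates): every call reads a volatile root before invoking, which is exactly the indirection an invokedynamic call site could fold away.

```java
public class VarDemo {
    interface Fn { Object invoke(Object arg); }

    // Sketch of a Var: the current function lives in a volatile field,
    // so a rebinding via def is visible to all threads -- but every
    // call pays for the volatile read before it can invoke.
    static class Var {
        private volatile Fn root;
        Var(Fn initial) { root = initial; }
        void rebind(Fn f) { root = f; }   // what def does
        Object call(Object arg) {
            return root.invoke(arg);      // volatile read, then invoke
        }
    }

    public static void main(String[] args) {
        Var v = new Var(x -> (Long) x + 1L);
        System.out.println(v.call(41L)); // 42
        v.rebind(x -> (Long) x * 2L);    // rebinding changes the target
        System.out.println(v.call(21L)); // 42
    }
}
```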

    If Clojure had to deal with dynamically 
    selecting a method based on a target object 
    type
Clojure's protocols are polymorphic on the type of the first argument. The protocol functions are call-site cached (with no per-call lookup cost if the target class remains stable). This is another place where invokedynamic would help.
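A minimal sketch of such a call-site cache, keyed on the first argument's class (assumed names, not Clojure's actual protocol implementation):

```java
public class ProtocolCacheDemo {
    interface Impl { String invoke(Object target); }

    // Per-call-site cache: remember the last receiver class and its
    // implementation; re-dispatch only when the class changes.
    static class CachedSite {
        private Class<?> cachedClass;
        private Impl cachedImpl;

        String call(Object target) {
            Class<?> c = target.getClass();
            if (c != cachedClass) {           // miss: full protocol lookup
                cachedImpl = lookup(c);
                cachedClass = c;
            }
            return cachedImpl.invoke(target); // hit: no per-call lookup cost
        }

        // Stand-in for the protocol's class -> implementation table.
        static Impl lookup(Class<?> c) {
            if (c == String.class) return t -> "string:" + t;
            if (c == Long.class)   return t -> "number:" + t;
            throw new IllegalArgumentException("no impl for " + c);
        }
    }

    public static void main(String[] args) {
        CachedSite site = new CachedSite();
        System.out.println(site.call("a")); // string:a (lookup happens)
        System.out.println(site.call("b")); // string:b (cache hit)
        System.out.println(site.call(1L));  // number:1 (cache refilled)
    }
}
```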

    benefit out of invokedynamic...if it hadn't 
    already added hacks and syntax
I think you're mixed up here. The JVM already optimizes classes and interfaces. All that Clojure's type hints allow you to do is to help the compiler in the cases where it's unable to infer the proper type. Hinting is not always necessary and better inferencing will start chipping into the remaining cases.

    JRuby compares favorably to Clojure without 
    the typing syntax, performance-wise, which 
    is quite acceptable to me. With invokedynamic
That's because JRuby is awesome tech. You've certainly set the bar high for dynamic dispatch that invokedynamic has yet to meet.

    as the Hotspot guys optimize invokedynamic.
What's the over-under on it happening by Java8? Java9?


I will grant I do not know the details of Clojure's dispatch protocols, but I've always felt like there's more that invokedynamic could do. Sounds like that's the case. My statements were based mostly on brief discussions with Rich about how dispatch happens...which made most of them sound pretty static in nature.

Regarding invokedynamic optimization: Most of what I'm playing with will go into the first Java 7 update. I'm helping to validate that it works, it's fast, and it's ready for consumption.


    there's more that invokedynamic could do
Definitely. As with anything, there are tradeoffs, so we'll likely hold off to see how it plays out.

    Most of what I'm playing with will 
    go into the first Java 7 update.
Hey no fair, you have inside information. All bets are off. This is great news actually. :-)


It's all public on the MLVM list. Unfortunately there's only a handful of dynlangs actively hitting this stuff. Would love to see more Clojure folks get involved.


While I respect your work on JRuby, I think you're not being honest about JRuby performance here nor being fair about Clojure's design decisions - they are not hacks.

Clojure supports unboxed math. One big example of why this is important: it's possible to implement Clojure's persistent data structures (and new ones) in Clojure itself. They are the very cornerstone of what Clojure is all about, and this is a case where unboxed performance makes all the difference in the world.

Persistent data structures are not that relevant to JRuby and thus you don't need to make that design decision.


For a dynamic language to add static types solely to get performance seems like a hack to me. Make the dynamic language fast enough that you don't need static types.

I can appreciate it was an explicit design decision. My calling it a "hack" is to blunt claims that Clojure is faster than equivalent JRuby code, when in actuality the Clojure code is not equivalent. Clojure doing fully boxed math performs similarly to JRuby doing fully boxed math. Apples to apples instead of apples to statically-typed oranges.

That said, I have wanted to do the same in JRuby. I do not have the freedom to make such a decision for Ruby, however, and Matz (Ruby's creator) has said there will never be static types. So...I will continue to work to make dynamic-typed math as fast as possible, and push the JVM to help me as much as it can. I won't hack around it :)


It's a narrow point, but the coincidence of "make the dynamic language fast enough that you don't need static types" and talk about equivalent perf using boxed math reminds me of the old saw about a sufficiently smart compiler[1]. Yeah, maybe the hotspot/jrockit wizards will be able to wave a wand and make everything fast. But, until the stars align in that department,[2] being able (and in Clojure's case, defaulting) to fast, static, primitive math is a good thing IMO, and really, really important to getting work done in certain domains today.

In the meantime, I greatly appreciate your pushing. :-D

[1] http://c2.com/cgi/wiki?SufficientlySmartCompiler

[2] e.g. http://blogs.oracle.com/jrose/entry/fixnums_in_the_vm


JRuby generates direct, "static" dispatches to the Fixnum class when you throw the --fast flag. However, this is not backwards compatible with MRI Ruby, so it's not default. Clojure does not attempt to maintain compatibility with anything, so there you go.


That's not true anymore.

--fast is mostly eliminated now, with all features either on by default or replaced by better compilation.

Math-like dispatches go through different dispatch logic optimized for Fixnums and Floats, this is true. If possible, they dispatch directly rather than dynamically. But they do this under global "isFixnumModified" and "isFloatModified" guards, to match Ruby behavior. The extra check did not significantly add to the overhead of math operations.
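The guard described above might look roughly like this (a simplified sketch with invented names; JRuby's real logic lives in its call-site classes):

```java
public class FixnumGuardDemo {
    // Global flag: flipped if anyone reopens Fixnum and redefines +.
    static volatile boolean fixnumModified = false;

    static long genericDispatchCalls = 0;

    static long add(long a, long b) {
        if (!fixnumModified) {
            return a + b;            // fast path: direct primitive add
        }
        return genericAdd(a, b);     // slow path: full dynamic dispatch
    }

    // Stand-in for a real dynamic lookup and call of Fixnum#+.
    static long genericAdd(long a, long b) {
        genericDispatchCalls++;
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(40, 2)); // 42, via the fast path
        fixnumModified = true;          // someone monkey-patched Fixnum
        System.out.println(add(40, 2)); // 42, via generic dispatch
        System.out.println(genericDispatchCalls); // 1
    }
}
```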

Of course with invokedynamic, all dispatch is done the same way, and indy optimizes as well as our old tricks in both the unmodified and modified cases. Yay invokedynamic!


Cool.

Of course, the is-modified flags don't do much for large, idiomatic Ruby applications. As soon as anyone modifies Object, Class, or Fixnum (as seen in Rails), it goes right back to slow dispatch mode, which is why invokedynamic is so great.


I certainly agree. As an implementer of Ruby, I've wanted type-hinting escape hatches to make my job easier. Given that they're likely never going to happen, I will continue to optimize the hard way and work closely with JVM guys :)


I love your commitment to purity here. Things could certainly be made much faster in Ruby if most of what makes it useful in certain contexts were abandoned. The complexity of invocation alone seems to be fairly unique with the potentially vast tree of class/module hierarchy along with metaclass, method_missing, and first-class, dynamic invocation targets (classes and modules).

I think many people don't realize that "Ruby is slow" because they're building on libraries that make extensive use of Ruby's dynamism with little regard given to performance. Imagine implementing Rails without *eval, method_missing and singleton classes.

Pull up a Rails console in an application with a few plugins, run "included_modules" on a class, and you'll see how many potential modules are involved in invocation. The last Rails project I worked on has 61 modules included into ActiveRecord::Base, mostly by plugins. I think there is plenty of abuse of the extension facilities that results in this type of overhead (what justifies Ym4r::GmPlugin getting included into Class?), but this is the world in which we live.


  For a dynamic language to add static types solely to get performance
  seems like a hack to me.
Be that as it may, that's Common Lisp's approach as well. I never have added type annotations in Common Lisp, but I like the possibility of giving the compiler more info. (If I understand your point correctly.) The particularly nice thing is that you can get the disassembly of a function, to see what effects your changes have on optimization. (http://www.psg.com/~dlamkins/sl/chapter16.html)


Type annotation wouldn't do much for Ruby honestly. Most idiomatic Ruby code (i.e. loading Rails into the object space at all) would break any optimization type annotation would yield. There's no compiler, so guards would have to be inserted to perform runtime type analysis (potentially very costly in Ruby). Common Lisp actually has static, scalar types, so type annotation has direct application. Ruby doesn't really have a notion of types in the classical sense, and ALL dispatch targets (even those written in C or "primitives") are extensible and modifiable by Ruby code.


This is one reason I never explored it. There has been some research into "gradually typing" dynamic-typed systems, and it gets hairy pretty quick.

In any case, I know JRuby's not going to be able to get boxed math as fast as primitive math, and that's ok. We'll just make it easy to use languages that can do fast math, and make method invocation against normal objects as fast as possible to compensate.


Of course you can get the disassembly of JITed code from Hotspot too, and the effects of boxed math are quickly visible. Escape analysis may help in the future, if it can be made more general.

While I consider type hinting an uglyish wart to work around limitations of the underlying VM, I also secretly wish we had such an escape hatch in JRuby. Oh well.


>> I work a lot with Rhino, JRuby, Jython, Groovy and Quercus (a JVM PHP engine), and can anecdotally say that Clojure and Quercus are "fastest" of their breed for the kind of web development work I do.

Interesting. We maintain a Java-based simulation toolkit in which the target language (Clojure, whatnot) must make a lot of calls to Java and work with a lot of mutable Java data. And it's been our experience that Clojure is -- I do not make up this number -- approximately 1000 times slower than Kawa for such tasks. Mostly because of immutability guarantees and Refs. Indeed, it's one of the slowest languages on the block for this task.

I wonder how Kawa would perform in his web development arena. I've found it the fastest non-Java JVM language anywhere.


In my experience (using Clojure full time since before there were official releases), Clojure can be written exactly as fast as java. That isn't idiomatic clojure code however.

The features clojure provides are somewhat slower, but well worth it. The purely functional datastructures are ~25% slower than their java equivalents. The seq abstraction is slightly slower, but allows handling infinite sequences, and only passes over the data once.

About the only thing I'm aware of that can slow your code down that much is by using reflection in a tight loop.

Hop on to #clojure on irc.freenode.net, we'd love to help you out.


I'm actually not able to follow your usage scenario. Do you mind providing a representative bit of code to illustrate?


Your claim does not line up with reality. http://shootout.alioth.debian.org/u32/which-programming-lang...

http://blog.dhananjaynene.com/2011/08/cperformance-compariso...

Those Clojure benchmarks in the above post were submitted by me to show that you can get Java/Scala performance.

EDIT: Toned down the rhetoric


I think it is interesting that you point the parent out for not lining up with reality. Can you elaborate on how the benchmarks in the above posts are a more realistic way to reason about performance than the experience of the language used in a real life project?


These exact same techniques used in these benchmarks are used in "real life" projects, for example this one that I work on: https://github.com/clojure/core.logic

This is a Prolog-like engine that I benchmark against SWI-Prolog (written in C). It comes close on some benchmarks, and surprisingly surpasses it on a few.


Fair enough, however I don't see a benchmark which measures the performance of Clojure in the situation described above (dealing with a lot of mutable Java objects) compared to other (JVM) languages. He clearly mentioned that, for such tasks, Clojure is 1000 times slower than Kawa. The fact that Clojure is fast for other types of programming problems is irrelevant for this particular case.


If the poster could give a (minimal) example to demonstrate their claims, then we could know for sure. Otherwise, we have no idea what the problem was.


are you making many small calls, each of which modifies a small part of a large structure, and clojure is creating a new instance of the large structure each time? that sounds like a pathological case for idiomatic clojure, but i would have thought you could work round it by keeping the data in a more "java like" format without too much pain.

i've been using clojure for numerical processing this last few weeks and have been impressed how well it mixes lazy sequences and vectors. but i haven't had to face a case like you seem to be having.


When I wrote some cpu-bound code in Clojure which also used several Java classes and libraries, I found that the naive implementation was 5x as slow as the Rhino implementation, but when I added some type hints (which I found out by enabling warn-on-reflection), the code became ten times as fast, beating Rhino twofold.

I wonder if it's possible to use invokedynamic when there is unhinted Java interop, to avoid the drastic slowdown that I experienced. It's obvious that many necessary type hints would be absent in a big real-world program.


In a big real world Clojure program you would probably control the amount of interop, or abstract it away. Lots of type hints is not indicative of a well-written Clojure program of any size. A good example of how this can be done is this layer over JDK 7 ForkJoin: https://gist.github.com/888733#file_forkjoin.clj. Note the code at the bottom has no type hints, yet it will be nearly as fast as if you littered the code with such noise - this is because of the lovely thing that is JVM inlining.


That's the case when the program is pure Clojure.

In my case, I'm trying to bring some Clojure into an existing project with a sizable Java codebase.

Anyway, I think that your Clojure code would use String, or Joda's DateTime, and a zillion other brilliantly useful classes. It's not practical to either wrap or rewrite all the Java richness.

Concerning your example: Clojure also does a good job of inferring types given a few type hints, and that I put to good use. Just a few hints, a great speedup.


We can agree to disagree. You have clojure.string. For Joda DateTime you have clj-time. There are a growing number of Clojure libraries that provide an idiomatic interface to Java functionality w/o resorting to pervasive type-hinting.


>"So, when Clojure calls a function, it either already has the instance in its entirety (a lambda) or it finds it by dereferencing a binding. Since all functions are instances that implement IFn, the implementation can then call invokeinterface, which is very efficient."

Perhaps, but from my understanding, with invokedynamic, for a function that ends up, for instance, adding two numbers, we can guarantee the types even though the code is dynamic. This means the JVM can perform optimizations like inlining. The invokeinterface call could then be replaced with an "iadd" bytecode at the call site, which in turn can get JITed, and so on.


I'd like to know more about this. My understanding is that every method in Clojure has this sort of signature: Object F(Object, Object,.. Object)

So Add looks like this: Object Add(Object, Object)

Now if Clojure recognizes that this is often called with longs, then it would make sense to produce what in C++ would be a template specialization: long Add(long, long). As you pointed out, that could then be inlined.

As long as we can specialize all the way up the call chain, we don't need any JVM magic. But that would be a lucky case, and I believe invokedynamic can help with our problem case: an argument-dependent transition from non-specialized code to specialized code. i.e., call Add(long, long) iff both args are Longs, otherwise call Add(Object, Object).
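The argument-dependent transition described above can be sketched in Java (invented names; this is one way such a specialization guard could look, not what Clojure actually emits):

```java
public class SpecializationDemo {
    // Generic entry point: the Clojure-style Object F(Object, Object).
    static Object add(Object a, Object b) {
        // Guard: if both args are boxed longs, jump to the primitive
        // specialization, which the JIT can inline down to a single add.
        if (a instanceof Long && b instanceof Long) {
            return add((long) (Long) a, (long) (Long) b);
        }
        return genericAdd(a, b);
    }

    // Hypothetical specialization: long Add(long, long).
    static long add(long a, long b) {
        return a + b;
    }

    // Fallback for everything else (doubles, ratios, bignums, ...).
    static Object genericAdd(Object a, Object b) {
        return ((Number) a).doubleValue() + ((Number) b).doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(add((Object) 40L, (Object) 2L)); // 42 via the long path
        System.out.println(add((Object) 1.5, (Object) 2L)); // 3.5 via the fallback
    }
}
```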

So, Clojure would profit from invokedynamic if it (transparently) introduced specialized methods for optimization. Of course, this is not exactly a trivial optimization to implement - it may be that the most practical way to implement it is to look at the runtime behaviour (sort of like what the JIT compiler does). Ideally the JVM would be able to do all this magic specialization for us (given the parallels to JIT compilation), but I doubt that it can do that at present.

Anyone know if I'm off the mark here, or whether Clojure would benefit if it did what I've termed "specialization"?


I love logic.

However, when dealing with optimizations, actual experimentation is a lot more convincing. I've seen far too many times when things 'should have been' better a certain way, but weren't... For various reasons.


Logic isn't even required in this case (although I suppose it can never hurt). Charles Nutter has shared some numbers for JRuby constant lookups at http://blog.headius.com/2011/08/invokedynamic-in-jruby-const...

- JRuby 2

- JRuby with current invokedynamic 35

- JRuby with some dev build that some guy who knows a guy who knows a guy at Oracle gave him: 0.5 [1]

The future might look bright, but a problem with the future is that it comes when it's ready and never before, no matter how much we want it to or might need it.

[1]: I'm just being playful here. I actually have no idea where Charles got that dev build.


invokedynamic is extremely powerful whenever there's a need to dynamically bind some function or variable. That means even statically-typed languages might benefit. In JRuby, it brings us within a stone's throw of Java performance for simple algorithms, provided that the Java version is doing the same work (e.g. doing boxed math for math algorithms).

Regarding [1], I made the build myself. Here's my build script and instructions: https://gist.github.com/1148321

To get the best possible performance, you'll want several unapplied patches from the Hotspot team. Ask on the mlvm-dev list.


Another general response...

Paul Stadig points out that Clojure actually does lookup (from mostly-immutable sources) in many, many cases, and correctly theorizes that it could benefit by using invokedynamic in those cases. Worth a read.

https://groups.google.com/d/msg/clojure/1mr0m-9XLoo/3_-OIFM8...


This is why I'm in college. I want to be able to understand this sort of thing better.


Charles Nutter has a great writeup from a few years ago of how invokedynamic is helpful for JRuby: http://blog.headius.com/2008/09/first-taste-of-invokedynamic...


At what college are they teaching this stuff? Are there schools that discuss the JVM environment outside of just using it for Java?


The general principles underlying the JVM are taught in a freshman-level class by a particular professor here.


You don't have to be in college to understand that. The Internet has more than enough resources for you to learn.


Hey, so while we're talking about it, what WOULD be some good resources on the internet (and maybe in books) for understanding "this sort of thing" better? I assume "this sort of thing" is runtimes, VMs, compilers, languages, dynamic and otherwise?

Rich Hickey's Clojure bookshelf includes Lisp in Small Pieces, which I've read most of, and I understand is sort of the go-to Lisp implementation book nowadays, but is possibly out of date for compilers in general. It also includes Essentials of Programming Languages (2nd ed?), Concepts Techniques and Models, the T Programming Language book, the JVM spec... but are there other books that would be crucial to enabling someone to open up the Clojure source and understand some of the optimizations or understand some of the issues re: dynamic languages on the JVM, compiling dynamic languages, and/or making dynamic languages fast in general (see SBCL)?

I know there's "Clojure in Small Pieces" which aims to be some kind of literate treatment of the Clojure source, but I think that's a WIP last time I checked.

Thanks!

edit gee look at this, I commented without looking at the rest of the front page: http://news.ycombinator.com/item?id=2927784


You're conflating what [you need | you think I need] with what I actually do need. They are not equivalent. The programming I do daily to make money consumes my non-school time. Plus even before I started classes and was programming professionally, this likely isn't something I would've been able to grasp in a reasonable timeframe. See, I'm usually (and happily) the stupidest guy in a room full of programmers. Clearly you're not; good for you. But Internet resources about the internal workings of compilers aren't that great for dumbasses like me.


This is a nice writeup, but it seems to miss some important details. Clojure may not need invokedynamic in its current state, but it could certainly benefit should many of its limitations dissipate.


Could you go into some more detail on what we would win? The author mentioned static methods as Clojure functions. What else?


Well, the question is what does invokedynamic give? In a nutshell, it's the raw material for building efficient polymorphic inline caches that are subject to finer grained HotSpot optimizations. At the moment Clojure and JRuby (and others for sure) build those PICs from "something else" (a technical term). If the speed promises of invokedynamic are ever realized (much less released) then bingo, speed gain. But Charles and Rich have set the bar high for PIC speed, and invokedynamic has not met that challenge. It probably will... one day. But there are tradeoffs that I didn't even touch on even when/if that day comes.
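The "raw material" in question is java.lang.invoke: a MutableCallSite whose target is a guardWithTest handle is essentially a one-entry PIC. A minimal sketch (the invokedynamic instruction itself would be emitted by a compiler; here we drive the call site by hand):

```java
import java.lang.invoke.*;

public class PicDemo {
    static String forString(Object o) { return "string:" + o; }
    static String forOther(Object o)  { return "other:" + o; }
    static boolean isString(Object o) { return o instanceof String; }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lk = MethodHandles.lookup();
        MethodType mt = MethodType.methodType(String.class, Object.class);

        MethodHandle guard = lk.findStatic(PicDemo.class, "isString",
                MethodType.methodType(boolean.class, Object.class));
        MethodHandle fast  = lk.findStatic(PicDemo.class, "forString", mt);
        MethodHandle slow  = lk.findStatic(PicDemo.class, "forOther", mt);

        // A one-entry polymorphic inline cache: run the guard, take the
        // fast path on success, fall back otherwise. invokedynamic lets
        // a language install exactly this at each call site, where
        // HotSpot can inline straight through it.
        MutableCallSite site = new MutableCallSite(
                MethodHandles.guardWithTest(guard, fast, slow));
        MethodHandle invoker = site.dynamicInvoker();

        System.out.println((String) invoker.invokeExact((Object) "hi")); // string:hi
        System.out.println((String) invoker.invokeExact((Object) 42));   // other:42
    }
}
```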


> for building efficient polymorphic inline caches

Ah, now I understand better.


Just curious why a single class with a single method taking an object array would not solve the issue? Granted, the dynamic marshalling would be a perf hit. Other than that?


Is this available without logging into a proprietary service to read it?



You do not have to log into anything to read it at the submitted link.


You might not have had to, I did. Also, the gmane link was clearer - both in that this was just a Usenet post, and which group it was.


Not Usenet. I'd never seen gmane before. Neat.



