Clojure doesn't need invokedynamic in general because Clojure doesn't really dispatch dynamically. It's dynamically typed, but calls are made via a known path to a specific function, and only via rebinding can that target change. The general case goes straight through.
In a sense, it's like Clojure is JRuby where all classes have only one method. With only one method, you can always call straight in, with no dynamic lookup required.
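To make the "one known path" point concrete, here's a minimal Java sketch of a Clojure-style var (all names here are mine, not Clojure's actual implementation): every call dereferences one known slot and invokes whatever function it holds, so rebinding is the only way the target can change and there is no per-call method lookup keyed on a receiver's type.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.LongUnaryOperator;

// A Clojure-style "var": calls always go through one known slot.
// Rebinding swaps the slot's contents; there is no per-call method
// lookup, unlike Ruby's type-based dispatch.
final class Var {
    private final AtomicReference<LongUnaryOperator> slot;

    Var(LongUnaryOperator f) { slot = new AtomicReference<>(f); }

    void rebind(LongUnaryOperator f) { slot.set(f); }

    // Dereference the slot, then call straight in: one indirection, no lookup.
    long invoke(long x) { return slot.get().applyAsLong(x); }
}
```

In this model the JIT sees a stable call target in the common case where nobody rebinds, which is why the general case can "go straight through."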
If Clojure had to deal with dynamically selecting a method based on a target object type, incoming parameter types, and so on, invokedynamic would be more useful. Indeed, Clojure often hacks around such cases by allowing you to specify Java signatures (to avoid reflection) or primitive types (to avoid boxed math). Both cases could be improved without ugly syntax if invokedynamic were used.
So to summarize, Clojure could get benefit out of invokedynamic...if it hadn't already added hacks and syntax to allow opting out of the most-dynamic cases.
In JRuby, we could avoid a lot of dynamic dispatch by allowing type signatures, type declarations, and so on. But that wouldn't be Ruby, and we don't control Ruby. JRuby compares favorably to Clojure without the typing syntax, performance-wise, which is quite acceptable to me. With invokedynamic, JRuby even compares favorably to Java, with the exception of boxed math (which is very hard to optimize in a dynamic language). For equivalent work, JRuby with invokedynamic is within 2x of Java, and that will continue to improve as the Hotspot guys optimize invokedynamic.
Regarding invokedynamic optimization: Most of what I'm playing with will go into the first Java 7 update. I'm helping to validate that it works, it's fast, and it's ready for consumption.
Clojure supports unboxed math. One big example of why this is important: it makes it possible to implement Clojure's persistent data structures (and new ones) in Clojure itself. They are the very cornerstone of what Clojure is all about, and this is a case where unboxed performance makes all the difference in the world.
Persistent data structures are not that relevant to JRuby and thus you don't need to make that design decision.
I can appreciate it was an explicit design decision. My calling it a "hack" is to blunt claims that Clojure is faster than equivalent JRuby code, when in actuality the Clojure code is not equivalent. Clojure doing fully boxed math performs similarly to JRuby doing fully boxed math. Apples to apples instead of apples to statically-typed oranges.
That said, I have wanted to do the same in JRuby. I do not have the freedom to make such a decision for Ruby, however, and Matz (Ruby's creator) has said there will never be static types. So...I will continue to work to make dynamic-typed math as fast as possible, and push the JVM to help me as much as it can. I won't hack around it :)
In the meantime, I greatly appreciate your pushing. :-D
See, e.g., http://blogs.oracle.com/jrose/entry/fixnums_in_the_vm
The --fast flag is mostly eliminated now, with all of its features either on by default or replaced by better compilation.
It's true that math-like dispatches go through separate dispatch logic optimized for Fixnums and Floats. When possible, they dispatch directly rather than dynamically. But they do this under global "isFixnumModified" and "isFloatModified" guards, to match Ruby behavior. The extra check did not add significantly to the overhead of math operations.
Of course with invokedynamic, all dispatch is done the same way, and indy optimizes as well as our old tricks in both the unmodified and modified cases. Yay invokedynamic!
Of course, the is-modified flags don't do much for large, idiomatic Ruby applications. As soon as anyone modifies Object, Class, or Fixnum (as seen in Rails), it goes right back to slow dispatch mode, which is why invokedynamic is so great.
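The guarded fast path described above can be sketched in a few lines of Java (the names are illustrative, not JRuby's actual internals): math takes the direct path unless a global "modified" flag has been flipped by someone reopening Fixnum, at which point calls fall back to full dynamic dispatch.

```java
// Sketch of a JRuby-style guarded math fast path. "fixnumModified" stands in
// for JRuby's isFixnumModified guard: it is flipped when core math classes
// are monkey-patched, disabling the direct path from then on.
final class GuardedMath {
    static volatile boolean fixnumModified = false;

    static long plus(long a, long b) {
        if (!fixnumModified) {
            return a + b;              // fast path: direct primitive add
        }
        return slowDispatch(a, b);     // slow path: full dynamic method lookup
    }

    // Stand-in for a real dynamic dispatch through the method table;
    // here it just produces the same answer the slow way.
    static long slowDispatch(long a, long b) {
        return Long.valueOf(a) + Long.valueOf(b); // boxed, lookup-style path
    }
}
```

The check itself is one volatile read and a branch, which is why it adds so little overhead; the real cost shows up only once the flag flips and everything routes through the slow path.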
I think many people don't realize that "Ruby is slow" because they're building on libraries that make extensive use of Ruby's dynamism with little regard given to performance. Imagine implementing Rails without *eval, method_missing and singleton classes.
Pull up a Rails console in an application with a few plugins, run "included_modules" on a class, and you'll see how many potential modules are involved in invocation. The last Rails project I worked on had 61 modules included into ActiveRecord::Base, mostly by plugins. I think there is plenty of abuse of the extension facilities that results in this type of overhead (what justifies Ym4r::GmPlugin getting included into Class?), but this is the world in which we live.
For a dynamic language to add static types solely to get performance
seems like a hack to me.
In any case, I know JRuby's not going to be able to get boxed math as fast as primitive math, and that's ok. We'll just make it easy to use languages that can do fast math, and make method invocation against normal objects as fast as possible to compensate.
While I consider type hinting an uglyish wart to work around limitations of the underlying VM, I also secretly wish we had such an escape hatch in JRuby. Oh well.
Interesting. We maintain a Java-based simulation toolkit in which the target language (Clojure, whatnot) must make a lot of calls to Java and work with a lot of mutable Java data. And it's been our experience that Clojure is -- I am not making this number up -- approximately 1000 times slower than Kawa for such tasks. Mostly because of immutability guarantees and Refs. Indeed, it's one of the slowest languages on the block for this task.
I wonder how Kawa would perform in this web-development arena. I've found it the fastest non-Java JVM language anywhere.
The features Clojure provides are somewhat slower, but well worth it. The purely functional data structures are ~25% slower than their Java equivalents. The seq abstraction is slightly slower, but it allows handling infinite sequences and only passes over the data once.
About the only thing I'm aware of that can slow your code down that much is using reflection in a tight loop.
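To illustrate why reflection in a tight loop hurts, here's a hypothetical Java micro-example (the names are mine, and this is not Clojure's actual interop code): the reflective version boxes every argument and result and re-checks access on each call, while the direct version is a plain, inlinable invocation. This is exactly the cost that unhinted Clojure-to-Java calls pay.

```java
import java.lang.reflect.Method;

// Direct call vs. reflective call in a tight loop. Both compute the same
// sum; the reflective one boxes arguments/results and re-validates access
// on every iteration, which is where the slowdown comes from.
final class ReflectDemo {
    public long half(long x) { return x / 2; }

    static long sumDirect(ReflectDemo d, int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += d.half(i); // plain invokevirtual, inlinable
        return s;
    }

    static long sumReflective(ReflectDemo d, int n) {
        try {
            Method m = ReflectDemo.class.getMethod("half", long.class);
            long s = 0;
            for (int i = 0; i < n; i++)
                s += (Long) m.invoke(d, (long) i);  // boxes the arg and the result
            return s;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Type hints exist precisely so the Clojure compiler can emit the direct form instead of the reflective one.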
Hop on to #clojure on irc.freenode.net, we'd love to help you out.
Those Clojure benchmarks in the above post were submitted by me to show that you can get Java/Scala performance.
EDIT: Toned down the rhetoric
This is a Prolog-like engine that I benchmark against SWI-Prolog (written in C). It comes close on some benchmarks, and surprisingly surpasses it on a few.
I've been using Clojure for numerical processing these last few weeks and have been impressed by how well it mixes lazy sequences and vectors. But I haven't had to face a case like the one you seem to be hitting.
I wonder if it's possible to use invokedynamic for unhinted Java interop, to avoid the drastic slowdown I've experienced. It's obvious that many necessary type hints will be absent in a big real-world program.
In my case, I'm trying to bring some Clojure into an existing project with a sizable Java codebase.
Anyway, I think your Clojure code would use String, or Joda's DateTime, and a zillion other brilliantly useful classes. It's not practical to either wrap or rewrite all that Java richness.
Concerning your example: Clojure also does a good job of inferring types given a few type hints, and I make good use of that. Just a few hints yield a great speedup.
Perhaps, but from my understanding of invokedynamic, for a function that ends up, for instance, adding two numbers, we can guarantee the types even though the code is dynamic. This means the JVM can perform optimizations like inlining. The invokeinterface call could then be replaced with an "iadd" bytecode at the call site, which in turn can get JITed, and so on.
So Add looks like this: Object Add(Object, Object)
Now if Clojure recognizes that this is often called with longs, then it would make sense to produce what in C++ would be a template specialization: long Add(long, long). As you pointed out, that could then be inlined.
As long as we can specialize all the way up the call chain, we don't need any JVM magic. But that would be a lucky case, and I believe invokedynamic can help with our problem case: an argument-dependent transition from non-specialized code to specialized code. i.e., call Add(long, long) iff both args are Longs, otherwise call Add(Object, Object).
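The argument-dependent transition described above maps directly onto the MethodHandles API that underlies invokedynamic. Here's a hedged sketch (class and method names are hypothetical, not from any real Clojure internals): guardWithTest routes a call to the specialized long add when both arguments are Longs, and to the generic boxed add otherwise.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// Argument-dependent specialization with MethodHandles.guardWithTest:
// call Add(long, long) iff both args are Longs, else Add(Object, Object).
final class Specialize {
    static long addLong(long a, long b) { return a + b; }          // specialized path

    static Object addGeneric(Object a, Object b) {                 // generic boxed path
        return ((Number) a).longValue() + ((Number) b).longValue();
    }

    static boolean bothLongs(Object a, Object b) {
        return a instanceof Long && b instanceof Long;
    }

    static MethodHandle buildAdd() {
        try {
            MethodHandles.Lookup lk = MethodHandles.lookup();
            MethodType generic =
                MethodType.methodType(Object.class, Object.class, Object.class);
            MethodHandle test = lk.findStatic(Specialize.class, "bothLongs",
                MethodType.methodType(boolean.class, Object.class, Object.class));
            MethodHandle fast = lk.findStatic(Specialize.class, "addLong",
                MethodType.methodType(long.class, long.class, long.class))
                .asType(generic); // adapt (long,long)->long to (Object,Object)->Object
            MethodHandle slow = lk.findStatic(Specialize.class, "addGeneric", generic);
            // The guard is what an invokedynamic call site would install:
            // test the args, pick the branch, and let the JIT inline through it.
            return MethodHandles.guardWithTest(test, fast, slow);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Small convenience wrapper around the polymorphic invoke.
    static Object callAdd(MethodHandle h, Object a, Object b) {
        try { return h.invoke(a, b); }
        catch (Throwable t) { throw new RuntimeException(t); }
    }
}
```

The point is that the JVM can inline straight through such a guard, which is the "JVM magic" that plain reflection or a hand-rolled if/else dispatch table doesn't get.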
So, Clojure would profit from invokedynamic if it (transparently) introduced specialized methods for optimization. Of course, this is not exactly a trivial optimization to implement - it may be that the most practical way to implement it is to look at the runtime behaviour (sort of like what the JIT compiler does). Ideally the JVM would be able to do all this magic specialization for us (given the parallels to JIT compilation), but I doubt that it can do that at present.
Anyone know if I'm off the mark here, or whether Clojure would benefit if it did what I've termed "specialization"?
However, when dealing with optimizations, actual experimentation is a lot more convincing. I've seen far too many times when things 'should have been' better a certain way, but weren't... For various reasons.
- JRuby: 2
- JRuby with current invokedynamic: 35
- JRuby with some dev build that some guy who knows a guy who knows a guy at Oracle gave him: 0.5
The future might look bright, but a problem with the future is that it comes when it's ready and never before, no matter how much we want it to or might need it.
(I'm just being playful here. I actually have no idea where Charles got that dev build.)
Regarding the dev build, I made it myself. Here's my build script and instructions: https://gist.github.com/1148321
To get the best possible performance, you'll want several unapplied patches from the Hotspot team. Ask on the mlvm-dev list.
Paul Stadig points out that Clojure actually does lookup (from mostly-immutable sources) in many, many cases, and correctly theorizes that it could benefit by using invokedynamic in those cases. Worth a read.
Rich Hickey's Clojure bookshelf includes Lisp in Small Pieces, which I've read most of, and I understand is sort of the go-to Lisp implementation book nowadays, but is possibly out of date for compilers in general. It also includes Essentials of Programming Languages (2nd ed?), Concepts Techniques and Models, the T Programming Language book, the JVM spec... but are there other books that would be crucial to enabling someone to open up the Clojure source and understand some of the optimizations or understand some of the issues re: dynamic languages on the JVM, compiling dynamic languages, and/or making dynamic languages fast in general (see SBCL)?
I know there's "Clojure in Small Pieces," which aims to be a kind of literate treatment of the Clojure source, but it was still a WIP last time I checked.
edit: gee, look at this; I commented without looking at the rest of the front page: http://news.ycombinator.com/item?id=2927784
Ah, now I understand better.