The authors may correct me, but I'm pretty sure the project's objective at the beginning was to compile Ruby to native code. As the project progressed, it became clear this wouldn't be possible without some syntax changes (like type annotations). The changes piled up until it no longer made sense to call the resulting language "Ruby" anymore.
The problem with this kind of paper is that it targets a hypothetical programming language based on the lambda calculus. In Ruby we have objects, instance variables and blocks. We have inheritance and modules. It's hard to make a 1-1 mapping between the paper and the real language. This is why Matz said (if I remember correctly) that the algorithm would need to be adapted to an object-oriented language like Ruby (so you can imagine, the paper doesn't even deal with objects, probably just functions!).
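To make the gap concrete, here's a hypothetical example of mine (not from the Crystal authors) of why instance variables alone already go beyond function-only inference: the type of `@value` depends on which methods get called anywhere in the program, so the analysis ends up having to infer a union type for it:

```ruby
# Hypothetical illustration: @value can hold different types depending
# on which methods are called, so a compiler inferring types for this
# class ends up with something like Int | String | Nil for #get.
class Box
  def set_int
    @value = 1
  end

  def set_str
    @value = "hi"
  end

  def get
    @value
  end
end

box = Box.new
box.get      # nil (nothing assigned yet)
box.set_int
box.get      # 1
box.set_str
box.get      # "hi"
```

Nothing like this shows up in a paper about plain lambda terms, where every binding has exactly one definition site.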
Before developing Crystal we read some papers, but all of them were pretty limited (they only dealt with primitive types like int and float). We had to devise a new algorithm that adapted well to all of the language's features. And the algorithm is very specific to the language, which is a good thing because we can make it optimal for Crystal instead of for a general language.
It's hard to describe the algorithm in detail. We'll eventually do so in blog posts.
1) I'm happy with Ruby as it is now and I don't want to see type annotations in method definitions. If I ever need static typing I can pick one of the zillion languages that have static types right now. I believe that I'll be served better than with a language that got them as an afterthought. I still remember C (without ++) and Java.
2) JRuby does JIT and it's not particularly faster than MRI. After all, even the JVM must resolve method calls at every method invocation instead of getting them resolved at compile/link time. Rubinius tries to do some magic there but it's still in line with MRI performance (a little slower actually). Based on the benchmarks I saw for JRuby vs MRI I'd say that only long-running programs will get sped up, but after all who cares whether an interactive program completes in 1 s or in 0.01 s? Development time is definitely more important there. However I trust Matz on that speedup factor, and any 0.00x% of speed is welcome.
A decent JIT should bring anywhere from a 4-20x speedup.
Why is it that Matz was only seeing a possible 3-4x difference?
I think setting a modest goal was a really intelligent decision by Matz. He wants a good-enough JIT, not the best JIT in the world.
I wonder how this "soft typing" would affect things like method_missing magic and the like. Would it be an optional runtime option to enforce the implicit interface?
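For reference, here's a minimal method_missing delegator (my own sketch, not tied to any proposed Ruby change) that shows why any static or "soft" check struggles here: the set of methods Proxy responds to isn't visible anywhere in its source.

```ruby
# A proxy whose public interface is defined entirely at runtime:
# it forwards any unknown method call to the wrapped object.
class Proxy
  def initialize(target)
    @target = target
  end

  def method_missing(name, *args, &block)
    if @target.respond_to?(name)
      @target.public_send(name, *args, &block)
    else
      super
    end
  end

  # Keep respond_to? consistent with the forwarding above.
  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name) || super
  end
end

Proxy.new([1, 2, 3]).sum  # => 6, even though Proxy never defines #sum
```

Any implicit-interface enforcement would either have to reject this outright or fall back to a runtime check, which is presumably why it could only ever be an optional mode.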
Instead, any object (though Go has no real objects) is considered an implementation of an interface iff it implements all the methods the interface defines.
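Ruby's duck typing is the runtime analogue of that structural check: any object that responds to the expected methods is acceptable, with no declared interface. A small sketch (the names are mine):

```ruby
# Duck typing: drain only cares that its argument responds to #each.
class Countdown
  def each
    3.downto(1) { |i| yield i }
  end
end

def drain(source)
  result = []
  source.each { |x| result << x }
  result
end

drain(Countdown.new)  # => [3, 2, 1]
drain([1, 2])         # => [1, 2]
```

The difference is that Go performs the check at compile time, while in Ruby a missing method only surfaces as a NoMethodError when the call actually happens.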