> You know the part where the JIT decides whether to execute X or Y? That can easily be implemented in a conditional in compiled code with the two code paths, X and Y.
And that's the difference with a JIT compiler: if the value is almost certainly X, then it just emits the code for X and a stub for Y. The stub just bails back into the interpreter. The check is a couple of machine instructions and the fast path can be pipelined (since branch prediction is unlikely to fail).
An AOT compiler has to include the full code for both X and Y, no matter how unlikely Y is. AOT compiling dynamically typed languages easily gets you 80-90% unused code.
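A rough sketch of the contrast in C: the Value layout, the tag encoding, and the deopt stub are all made up for illustration, not what any particular VM actually emits.

```c
/* Sketch of the two lowerings for a dynamically typed `a + b` where
 * profiling says both operands are almost always small integers. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t tag;              /* 0 = int, anything else = string, float, ... */
    int64_t as_int;
} Value;

/* Hypothetical fallback: a real JIT would deoptimize here and hand the
 * current frame back to the interpreter. */
static Value bail_to_interpreter(Value a, Value b) {
    (void)a; (void)b;
    fprintf(stderr, "deopt: falling back to the generic path\n");
    Value v = { .tag = 1, .as_int = 0 };
    return v;
}

/* JIT-style speculation: one cheap, highly predictable guard, then the
 * straight-line fast path. The slow path is just a call stub, so the
 * hot code stays tiny and cache-friendly. */
static Value add_speculated(Value a, Value b) {
    if ((a.tag | b.tag) != 0)              /* guard: a couple of instructions */
        return bail_to_interpreter(a, b);  /* cold stub, rarely taken */
    Value v = { .tag = 0, .as_int = a.as_int + b.as_int };
    return v;
}

/* AOT-style lowering: every possible combination has to be compiled in,
 * even the ones this program will never execute. */
static Value add_generic(Value a, Value b) {
    if (a.tag == 0 && b.tag == 0) {
        Value v = { .tag = 0, .as_int = a.as_int + b.as_int };
        return v;
    }
    /* ...full code for string concatenation, float promotion, operator
     * overloading, error paths, etc. would all have to live here... */
    Value v = { .tag = 1, .as_int = 0 };
    return v;
}

int main(void) {
    Value a = { .tag = 0, .as_int = 40 };
    Value b = { .tag = 0, .as_int = 2 };
    printf("%lld\n", (long long)add_speculated(a, b).as_int);
    printf("%lld\n", (long long)add_generic(a, b).as_int);
    return 0;
}
```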
Then why isn't your argument about code size rather than performance? Seems like an odd position to take. The fast path would be branch-predicted in the AOT case as well.
The only difference in this hypothetical is that the slower path would run faster.
Because with 80%+ unused code, your instruction cache is gonna choke.
Also, JITs can remove many more type checks, because they know when a check would be redundant. See e.g. Lazy Basic Block Versioning [1], but existing tracing JITs do quite well here too.
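Here's a hand-waved C sketch of what "removing redundant checks" means in practice: once a block version has been generated under the assumption "x is an int", only the entry guard survives. The Value layout and function names are invented for the example, not taken from any real implementation.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t tag;      /* 0 = int, anything else = not an int */
    int64_t as_int;
} Value;

static int64_t interpreter_fallback(Value x) {
    fprintf(stderr, "deopt: tag %u is not an int\n", (unsigned)x.tag);
    return 0;
}

/* Naive lowering: every operation re-checks the tag. */
static int64_t sum_of_three_checked(Value x) {
    int64_t acc = 0;
    if (x.tag != 0) return interpreter_fallback(x);
    acc += x.as_int;
    if (x.tag != 0) return interpreter_fallback(x);   /* redundant */
    acc += x.as_int * 2;
    if (x.tag != 0) return interpreter_fallback(x);   /* redundant */
    acc += x.as_int * 3;
    return acc;
}

/* Versioned lowering: this block version was generated for "x is an
 * int", so only the entry guard remains and the body is check-free. */
static int64_t sum_of_three_versioned(Value x) {
    if (x.tag != 0) return interpreter_fallback(x);   /* single guard */
    int64_t i = x.as_int;
    return i + i * 2 + i * 3;
}

int main(void) {
    Value x = { .tag = 0, .as_int = 7 };
    printf("%lld %lld\n",
           (long long)sum_of_three_checked(x),
           (long long)sum_of_three_versioned(x));
    return 0;
}
```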