
For the sake of argument:

What is an example of an optimization that a JIT compiler can make but an AOT compiler cannot?

If the developer is able to profile the application on typical end-user workloads, don't profile-guided optimizations provide the same benefit as JIT runtime profiling?

Why can't an AOT compiler just consider every path a "hot" path?

Last but not least: Got any benchmarks?

For one: a JIT can do polymorphic inline caching (you can read more about it in the work of Urs Hölzle, Google's senior vice president of operations[1]), while an AOT compiler can't.
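To make the idea concrete, here is a minimal sketch of a polymorphic inline cache simulated in Python. A real JIT patches machine code at the call site; this toy `CallSite` class (names are made up for illustration) instead keeps a per-call-site cache mapping receiver type to resolved method, so repeated calls with already-seen types skip the generic lookup.

```python
# Sketch of a polymorphic inline cache (PIC), simulated in Python.
# A real JIT patches the call site in machine code; here we cache
# type -> method so only the first call per type pays for lookup.

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s * self.s

class Circle:
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r * self.r

class CallSite:
    """One 'call site' with its own inline cache."""
    def __init__(self, name):
        self.name = name
        self.cache = {}          # receiver type -> resolved method
        self.slow_lookups = 0    # how often the generic slow path ran

    def call(self, receiver):
        meth = self.cache.get(type(receiver))
        if meth is None:                       # slow path: generic lookup
            self.slow_lookups += 1
            meth = getattr(type(receiver), self.name)
            self.cache[type(receiver)] = meth  # cache grows: mono -> polymorphic
        return meth(receiver)

site = CallSite("area")
shapes = [Square(2), Circle(1), Square(3)] * 100
results = [site.call(s) for s in shapes]
# Only two slow lookups ever happen: one per receiver type seen.
print(site.slow_lookups)  # 2
```

The point of the simulation: the cost of dynamic dispatch is paid once per type per call site, not once per call, and a JIT can go further by inlining the cached targets.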

Wikipedia gives a few more[2]: runtime profile-guided optimizations and pseudo-constant propagation.

[1] http://research.google.com/pubs/author79.html

[2] http://en.wikipedia.org/wiki/AOT_compiler

The Polymorphic Inline Caching paper refers to AOT compiling with runtime hints.

In the case of non-dynamic languages like C and C++, which clang generally targets, are there other examples where a JIT makes optimizations possible that AOT cannot?

Profile-guided optimizations that are relevant for the specific invocation of the program. Loop optimization based on invocation parameters for that specific run. Hard-coding the jump target address for calls into dynamically loaded libraries (you can't do that AOT, because if the library is replaced, the symbol offsets change).

Optimizing for the specific processor you're running on, as opposed to being forced to compile for a lowest common denominator.

A whole bunch of other small things like that.
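One of the items above, hard-coding the jump target of a dynamically loaded symbol, can be sketched in Python. This is only an analogy: `importlib`/`math` stand in for `dlopen`/`dlsym`, and the function names are invented for the example. The "AOT-style" version re-resolves the symbol on every call (like an indirect jump through a PLT); the "JIT-style" version resolves it once for this process and then calls it directly.

```python
# Analogy for eliminating late binding: resolve a dynamically
# loaded symbol once at run time, then call it directly, instead
# of re-resolving it on every call.

import importlib

def call_via_lookup(x):
    mod = importlib.import_module("math")  # resolve the "library"
    return getattr(mod, "sqrt")(x)         # resolve the "symbol", then call

_math = importlib.import_module("math")
_sqrt = getattr(_math, "sqrt")             # resolved once for this run
def call_direct(x):
    return _sqrt(x)                        # direct call, no per-call lookup

print(call_via_lookup(9.0), call_direct(9.0))  # 3.0 3.0
```

A native JIT gets the same effect by emitting a direct call to the address the loader actually chose for this process, which an AOT compiler cannot know in advance.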

One nice thing JITs can do that AOT compilers can't is on-stack replacement: recompiling a function at run time based on new information and swapping in the new code even while the function is executing. This lets you do speculative optimizations.

For example, you might observe that branch X is always taken. So you assume X will always be true, and add a guard that triggers a recompilation just in case the assumption fails. You then reoptimize the function on the basis of your new (speculative) information about X. This can improve register allocation, let you remove lots of code (other branches, say), inline functions, etc.

Java JITs have been known to inline hundreds of functions deep with this.
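The guard-plus-deoptimization cycle described above can be sketched in miniature. This is a toy simulation (class and method names are invented): the "compiled" fast path speculates that both operands are ints, a cheap guard checks the speculation, and the first time the guard fails the object "deoptimizes" by swapping in the generic version, standing in for a JIT recompilation.

```python
# Sketch of guarded speculation with deoptimization. The fast path
# assumes the types observed so far; when the guard first fails,
# we fall back to ("recompile" as) the generic version.

class SpeculativeAdder:
    def __init__(self):
        self.deopts = 0
        self.impl = self._fast       # start with the speculated fast path

    def _fast(self, a, b):
        # Guard: we speculated that both operands are ints.
        if type(a) is int and type(b) is int:
            return a + b             # fast path: no generic dispatch
        self.deopts += 1             # guard failed: deoptimize
        self.impl = self._generic    # "recompile" to the general version
        return self._generic(a, b)

    def _generic(self, a, b):
        return a + b                 # full dynamic semantics

add = SpeculativeAdder()
print(add.impl(1, 2))       # 3, via the speculated fast path
print(add.impl(1.5, 2.5))   # 4.0, triggers one deoptimization
print(add.deopts)           # 1
```

An AOT compiler cannot do this because it has no mechanism for replacing already-emitted code when an assumption is invalidated at run time.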

A simple example: let program P do a zillion <something> * <command line argument> multiplications, and suppose the program is called every hour with argument value zero or one, depending on a coin flip. An AOT compiler cannot know that the program will never be called with other arguments. A JIT compiler could remove all the multiplications.
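That zero-or-one example can be sketched directly. This is a hand-written simulation of what a JIT would do automatically (the function names are made up): once the actual argument for this invocation is known, the "specializer" emits a version with the multiplications folded away, while the generic version an AOT compiler must ship keeps them.

```python
# Sketch of the zero-or-one example: specialize the hot loop on the
# argument value for *this* run, removing the multiplications that
# an AOT-compiled binary must keep.

def run_generic(data, k):
    return [x * k for x in data]              # AOT: multiply every element

def jit_specialize(k):
    if k == 0:
        return lambda data: [0] * len(data)       # no multiplications at all
    if k == 1:
        return lambda data: list(data)            # identity: just copy
    return lambda data: [x * k for x in data]     # fall back to generic

data = list(range(5))
run = jit_specialize(0)          # the argument this hour happened to be 0
print(run(data))                 # [0, 0, 0, 0, 0]
assert run(data) == run_generic(data, 0)
```

The specialized and generic versions always agree on the result; the difference is only that the specialized one does no multiplies for this run.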

Profile-guided optimizations only work on the next run, and, when used by the developer, do not work for cases where there are widely different usage profiles for a single program. For example, most users would have data sets that fit in memory, but others will have ones that do not.

Wouldn't you get a code explosion and difficulties dealing with cache coherency if every path was a hot path (serious question, I don't know much about this stuff)?
