What is an example of an optimization that a JIT compiler can make that an AOT compiler cannot?
If the developer is able to profile the application on typical end-user workloads, don't profile-guided optimizations provide the same benefit as JIT runtime profiling?
Why can't an AOT compiler just consider every path a "hot" path?
Last but not least: Got any benchmarks?
Wikipedia gives a few more: runtime profile-guided optimizations and pseudo-constant propagation.
In the case of non-dynamic languages like C and C++, which clang generally targets, are there other examples where a JIT would make things possible that are not possible with AOT?
Optimizing for the specific processor you're running on, as opposed to being forced to compile for a lowest common denominator.
A whole bunch of other small things like that.
For example, you might see that branch X is always taken. So you assume that X will always be true, and add a guard just in case, which triggers a recompilation if it ever fails. You reoptimize the function on the basis of your new (speculative) information about X. This can improve register allocation, let you remove lots of code (other branches, perhaps), inline functions, etc.
Java JITs have been known to inline hundreds of functions deep with this.
Profile-guided optimizations only benefit the next build and run, and, when the profile is collected by the developer, they cannot cover cases where a single program has widely different usage profiles across its users. For example, most users might have data sets that fit in memory, while others have ones that do not.