
A peephole optimiser for Java bytecodes (1998)
https://frippery.org/jopt/index.html
======
kodablah
I've often wondered why Java/JVM projects don't do more optimization (a la
LLVM) during compilation. I understand there is some optimization at JIT time,
that premature optimization can be unnecessary, that it would require whole-
program opts in some cases, and that tools like Proguard do perform a few. But
in general, when I read the bytecode produced by javac it is very much what I
wrote, and I can spot optimizations I could even make by hand: removing unused
dup+pop pairs, not re-reading final fields, etc.
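For instance, javac translates source almost literally and will re-read a final field on every loop iteration rather than hoisting it. A toy illustration (the class name here is invented; you can confirm the repeated aload_0 + getfield pair with `javap -c`):

```java
public class FinalFieldLoop {
    // Assigned in the constructor, so this is not a compile-time constant
    // and javac emits a real getfield for every read.
    private final int n;

    public FinalFieldLoop(int n) { this.n = n; }

    public int sum(int times) {
        int s = 0;
        for (int i = 0; i < times; i++) {
            // javac emits aload_0 + getfield on each iteration;
            // it does not hoist the read of `n` out of the loop.
            s += this.n;
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(new FinalFieldLoop(3).sum(4)); // prints 12
    }
}
```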

~~~
aardvark179
Most of these really don't matter once you are running under a JIT, and some
that seem entirely obvious are much more problematic than you might expect.

Final field values can be changed through reflection and a few other
mechanisms. The Java Language Specification allows a compiler/VM to ignore
changes to final fields, but several libraries require those changes to be
visible. So that can't be optimised away at the bytecode level.
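A minimal sketch of that loophole (class name invented for illustration): with default JVM settings, a reflective write to a non-static final field succeeds, and subsequent reads observe the new value, which is exactly why a bytecode optimiser can't constant-fold the read.

```java
import java.lang.reflect.Field;

public class MutableFinal {
    private final int value;

    public MutableFinal(int value) { this.value = value; }

    public int value() { return value; }

    public static void main(String[] args) throws Exception {
        MutableFinal m = new MutableFinal(1);
        System.out.println(m.value()); // 1

        Field f = MutableFinal.class.getDeclaredField("value");
        f.setAccessible(true); // allowed: we're accessing our own field
        f.setInt(m, 2);        // writes through the `final`

        // With default JVM settings the new value is observed.
        System.out.println(m.value()); // 2
    }
}
```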

The JIT can optimise final field access for some internal classes, and for
others if an option is set (see
https://shipilev.net/jvm-anatomy-park/17-trust-nonstatic-final-fields/). In
Truffle-based languages like TruffleRuby
(https://github.com/oracle/truffleruby) we can tell the JIT that certain
things are truly constant, but this can't be done universally for Java.

If you're producing bytecode and you want it to run fast, your best bet is to
produce bytecode similar to javac's, since that is what HotSpot has been tuned
to run fast, even if it doesn't look optimal.

