I tossed out Clojure as a random language -- all my testing was done with the most efficient Java and C# I could write.
The link is fine, but only shows micro-benchmarks. It's much easier to get a 3x difference in execution speed between compiler versions when you have a three-line example -- you're maybe exercising 5% of the optimizer?
The current version of the C++ code comes in at around 50 kloc. The Java and C# versions were a bit smaller, since once I stopped testing the other platforms all new code went only into the C++ version, but it's still a significant piece of work. I haven't seen any significant real application where the theoretical benefits of Java (the JIT's profile-guided optimizations, etc.) have actually come to pass. You can write slow C++ code, but if you know what you're doing and instrument carefully, there's little or no evidence that Java can be as fast on real programs.
> It's much easier to get a 3x difference in execution speed between compiler versions when you have a three-line example
Those three lines can just as well be the bottleneck in a big program, responsible for 80% of its runtime. That example was meant to show that compiler-induced performance differences within a single language can be just as large (here up to 4x) as the Java-vs-C++ differences seen in microbenchmarks. So a 2x difference in a microbenchmark where Java loses to C++ is probably not a statistically significant difference, even though it may matter to the end user.
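To make the "three lines" point concrete, here is a hypothetical illustration (not code from either of our projects): a tiny primitive-array kernel that is simultaneously what a microbenchmark measures and the kind of loop that can dominate a real program's runtime. Compiler and JIT differences on a loop like this can easily reach the 2-4x range being discussed. The class and method names are made up, and the timing is deliberately crude; a serious comparison would use a harness like JMH to handle JIT warm-up.

    // Hypothetical example: a three-line kernel that a microbenchmark
    // exercises directly and that can also be a big program's hot spot.
    public class DotProduct {
        // The "three lines" -- a simple dot product over primitive arrays.
        static double dot(double[] a, double[] b) {
            double sum = 0.0;
            for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
            return sum;
        }

        public static void main(String[] args) {
            int n = 1 << 20;
            double[] a = new double[n], b = new double[n];
            for (int i = 0; i < n; i++) { a[i] = i; b[i] = n - i; }

            // Crude wall-clock timing only; no warm-up handling here.
            long start = System.nanoTime();
            double result = 0.0;
            for (int rep = 0; rep < 1000; rep++) result += dot(a, b);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(result + " computed in " + elapsedMs + " ms");
        }
    }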
If you rerun this benchmark three years from now, with newer versions, you may well get completely different results.
As for real applications: the fact that you don't know of any real, fast Java program doesn't mean no evidence exists. It is admittedly hard to find two programs that do exactly the same thing written in two different languages, but there are quite a few high-performance Java apps that are #1 in their class: Netty, Apache Cassandra, Apache Spark, the LMAX Disruptor (Java used for an ultra-low-latency app, see: http://programmers.stackexchange.com/questions/222193/why-di...). Someone also ported Quake2 to Java (Jake2) just to show it was possible, and that didn't make it slower.
I'm not saying there's never a use case for Java. It's reasonably well suited for server-side work (and most of your examples are IO-bound server processes).
When your main problem is IO, thread contention, etc., Java gives you better tools to address it. It's just not at all well suited to the kind of client apps people run on their desktop computers, and to a lesser (but still visible) degree it's not well suited to CPU-bound work like a lot of scientific computing, which is closer to where my expertise lies.
Looking at Jake2, the author notes: "The 0.9.2 release shows a big improvement due to the new “fastjogl” OpenGL renderer. The new renderer reduces the number of native interface calls. JNI calls produce a considerable overhead and were the main bottleneck in our application." If he's doing enough JNI for that to be the main bottleneck, then clearly quite a lot of the heavy lifting isn't being done in Java.
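For readers unfamiliar with why "reducing the number of native interface calls" helps, here is a minimal structural sketch of the usual fix. The class, method, and library names are hypothetical and the C implementations behind the native declarations are not shown; the point is only the call pattern. Every JNI crossing carries fixed overhead (argument marshalling, state transitions), so a renderer that makes one native call per vertex pays that cost thousands of times per frame, while one that hands over a whole buffer pays it once.

    // Minimal sketch (hypothetical names, native side omitted) of batching
    // work to cut down JNI crossings.
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    public class NativeBatchingSketch {
        // Chatty pattern: one JNI crossing per vertex.
        static native void addVertex(float x, float y, float z);

        // Batched pattern: one JNI crossing for a whole frame's geometry.
        static native void drawVertices(FloatBuffer vertices, int count);

        static boolean nativeAvailable = false;
        static {
            try {
                System.loadLibrary("rendererstub"); // hypothetical library
                nativeAvailable = true;
            } catch (UnsatisfiedLinkError e) {
                // Without the native library this stays a structural sketch.
            }
        }

        public static void main(String[] args) {
            int count = 10_000;
            // A direct buffer lets native code read the data in place,
            // without a per-element copy through JNI.
            FloatBuffer buf = ByteBuffer.allocateDirect(count * 3 * Float.BYTES)
                                        .order(ByteOrder.nativeOrder())
                                        .asFloatBuffer();
            for (int i = 0; i < count; i++) {
                buf.put(i).put(2f * i).put(3f * i);
            }
            buf.flip();

            if (nativeAvailable) {
                drawVertices(buf, count); // 1 crossing instead of 10,000
            } else {
                System.out.println("Native library not present; sketch only.");
            }
        }
    }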