Good fun though.
It just translated 'hello' back and forth in every pair of languages I tried.
Neat visualization! I wonder how large it could get if people could add their own nodes and edges.
I think you are looking for WASM and it is doable today. WASM can target the JVM.
- CIL to C++ using CoreRT/AoT compiler 
- JVM to CIL using ikvmc 
- Quite a few languages to WebAssembly using LLVM 
- J# if you are willing to include dead but somewhat significant languages
Languages are compiled to LLVM IR and then to WebAssembly. These are shown, right?
Edit: Or I guess technically python byte code. 
Python doesn't compile to C, technically or otherwise. Where did you get that impression?
I presume they don't include internal IRs like Python bytecode, as it's not any kind of shared or standardised format. Otherwise you'd be including tons of different compiler IRs.
edit: found the docs on python.org for the compiler 
You may be thinking of Cython, which compiles a language similar to Python, but not quite Python, into C using the Python C extension interface.
Actually, WebAssembly as a compile target is just absent from the graph, which is weird. The Wasm compiler scene is changing pretty rapidly, though.
That's got to look like spaghetti by the end, no?
42 languages compile to Machine Code
Wait, what? There are 2 languages that don't compile to Machine Code? Which ones are those, and how is it even possible?
Instead of compiling to machine code, you compile to another language. C++, for example, was originally compiled to C.
Why would you think it wasn't possible?
It is super-multiplatform.
But there's more to it than that. The bytecode is actually interpreted at first by the JVM runtime. The code is also continuously dynamically profiled. There are two compilers C1 and C2.
Whatever functions are using the most CPU time get compiled with C1. C1 compiles quickly and produces only lightly optimized code, but this is still a big speedup over the bytecode interpreter. The function is also scheduled to be recompiled in the near future by the C2 compiler, which spends a lot of time optimizing and aggressively inlining.
But there's more. C2 can optimize its output for the exact target instruction set, plus extensions, of the actual hardware it is running on at the moment. An ahead-of-time C compiler typically cannot do that: it has to generate x86-64 code that runs on a wide variety of processors.
But there's more. The C2 compiler can optimize based on the entire running program. Suppose a call from one author's library into another author's library could be sped up by generating a specialized version of the callee. C2 can do that, where a C compiler cannot, because the C compiler knows nothing about the insides of the other library -- which might be rewritten tomorrow, or might not even be written yet. Once the Java program is started, the C2 compiler can see all parts of the running program and optimize as needed.
But there's more. Suppose YOUR function X calls MY function Y. If your function X is using a lot of CPU, it gets compiled to machine code by C1, and shortly after gets recompiled by C2. The C2 compiler might inline my Y function into your X function. Now suppose the class containing my Y function gets dynamically reloaded. Your X function now contains a stale inlined copy of my Y function, so the JVM runtime deoptimizes your X function back to being bytecode interpreted. If X is still using a lot of CPU, it gets compiled again by C1, and then, in a while, by C2.
All this happens in a garbage collected runtime platform.
It is why Java programs seem to start up quickly but take a few minutes to "warm up" before they run at full speed. Many Java workloads are long-running servers, so startup is infrequent.
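A minimal sketch of that warm-up effect (the class, method, and loop counts here are my own invention, not from the thread; the exact invocation thresholds at which HotSpot promotes a method between tiers are tunable and version-dependent):

```java
// Hypothetical demo: a hot method that HotSpot moves through the tiers
// described above. Run with -XX:+PrintCompilation to watch the
// interpreter -> C1 -> C2 progression in the JIT log.
public class WarmupDemo {

    // A pure function, so the result is easy to check. After enough
    // invocations the JVM compiles it from bytecode to machine code.
    static long sumOfSquares(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) {
            s += (long) i * i;
        }
        return s;
    }

    public static void main(String[] args) {
        // Warm-up loop: repeated calls push the method past the JIT
        // invocation thresholds (interpreter, then C1, then C2).
        for (int i = 0; i < 20_000; i++) {
            sumOfSquares(1_000);
        }

        // By now the method is (very likely) running as optimized
        // machine code rather than interpreted bytecode.
        long t0 = System.nanoTime();
        long r = sumOfSquares(1_000);
        long t1 = System.nanoTime();
        System.out.println("sumOfSquares(1000) = " + r
                + ", post-warm-up call took " + (t1 - t0) + " ns");
    }
}
```

Timing a single call like this is only illustrative, of course; real warm-up measurement needs a harness like JMH.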
Now you know why Java can run fast using only six times as much memory as a C program.
There are other ways to execute Java applications.
If WATFIV was the only language that ran in browsers, everything would target it, too.
It's just that, at least for the first couple decades of its existence, the platform in question had no bytecode, so transpiling was the only way to escape.
The name of that tool is even labelled in the graph!
The machine code isn't the intermediate product - it's the final product. Some internal IR is the intermediate product.
But yes, it's the normal convention in the industry to describe JIT compilation to machine code as 'compiling to machine code'.
There are Java compilers directly to native code though, the Oracle JVM isn't the only path.
Dude, if there was some kind of wiki documenting HN lore, there would have to be a page on you drilling this fact into our skulls for years on end! :)
Or with LLVM-IR to C in the last step you could get from Java to C...
0 - https://github.com/cretz/asmble
1 - https://github.com/cretz/asmble/tree/master/examples/rust-re...