Hacker News

Our benchmarks show amazing results... but not in speed. WebAssembly for us is about the size of the binary and the speed of parsing more than the speed of execution... A JavaScript JIT with enough execution data can, in theory, be even faster than C.


In theory you could feed the profile data back into the C compiler, so in practice I think the C could always be faster.


This is called profile-guided optimization.[0] Slightly off-topic, but even Rust has it, courtesy of LLVM.[1]

It doesn't seem very popular unfortunately. Most web/tech companies are obsessive about capturing user interaction data for UX purposes, but it would be nice if that data could be fed back into a profile-guided optimizer.

In theory the compiled WebAssembly output would always be optimized for the latest usage trends, provided there were frequent builds. Depending on the product, it may even be possible to classify users into various high-level categories based on their behavior, and serve them a binary that's optimized for their specific use case. The latter's level of specificity probably wouldn't be worth it though.

[0] https://en.wikipedia.org/wiki/Profile-guided_optimization

[1] https://unhandledexpression.com/2016/04/14/using-llvm-pgo-in...


If you have dynamic data, the best optimizations could change, and you can't feed the profile back to the compiler while the program is running. (Well... maybe in theory.)


The JVM has been capable of this for a while. I think the overhead of profiling and applying optimizations has always been greater than any efficiency gains.


'the overhead of profiling and applying optimizations has always been greater than any efficiency gains.'

Not necessarily, especially if you explicitly rely on them.

For example, in C++ you have to deliberately avoid overusing virtual functions if you don't want the overhead. In Java, virtual functions are the default, and you usually don't care: if you call them a lot (e.g. in a tight loop) the JVM will adaptively give you the proper implementation - without per-call indirection - or may even inline it.

If you code C++ in Java style then the JVM will win - so the overhead of (runtime) profiling and applying optimizations is not always greater.


G++ is able to optimize virtual calls to non-virtual calls in most cases (speculative devirtualization) by adding quick checks into the code (like if (obj->vtable == Class::vtable) Class::virtualMethod(obj);), which the JVM would do as well.

So if G++ is able to tell the code paths leading to the virtual call, it does the same optimization as the JVM.


g++ cannot do that across dynamic libraries. It needs to see the source code at compile time.

This is where JIT wins, as it can do that with binary deployed code.


I think that's almost certainly false. JIT'd Java is almost certainly faster than bytecode-interpreted Java.

edit: or do you mean that the overhead of applying optimizations and profiling outweighs the performance gains relative to compiling something like C directly to machine code?


I mean the collection and application of the optimizations take up more CPU than the optimizations save. This isn't JIT but runtime adaptive optimization (https://en.wikipedia.org/wiki/Adaptive_optimization) while the system is running.


It depends a lot on things like how long the program runs for, and whether you consider spare cores on the machine to be "taking up CPU" or not.

Profile-guided optimisations win you about 20% in most industrial Java, apparently. So if you have a server that starts, warms up in say 30 seconds, and then runs for a week, you can see that it is easily profitable. For a command line app that lives for 500 msec, not so much.


I've always wondered what C running on a JIT would be like. As far as I'm aware, all the modern JIT languages are quite high level and do a lot more than C to begin with, so it would be interesting to see if you could eke out even more speed from C using a JIT.


A JIT faster than C? I thought in theory it would always have to be slower because of the overhead of the JITing. Also, isn't compiling C to native the best you can hope for anyway? What could be faster than that, except for asm?


Typical benchmarks for JITted languages like JavaScript don't include the JIT compilation time in their numbers.

JITs can beat C in performance in situations where understanding how the program behaves at runtime influences code-generation optimizations.



