edit: the JS code is a bit buggy: it does |if (Math.abs(dblDiscriminant) < 0.01)| in the |intersection| function, but dblDiscriminant is a var that's only assigned in the next loop! Math.abs(undefined) is something we can easily optimize in the JIT, but really the JS code is broken. Folding this in our JIT improves this to < 130 ms for me.
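For anyone curious what that bug looks like, here's a minimal sketch of the pattern (names and quadratic math invented for illustration; the real renderer's code differs). Because |var| is hoisted to the top of the function, the guard reads |undefined| before the loop ever assigns the variable:

```javascript
// Hypothetical sketch of the bug pattern described above.
function discriminantCheck(coeffs) {
  // BUG: `var dblDiscriminant` below is hoisted, so here it reads as
  // undefined. Math.abs(undefined) is NaN, and every comparison with
  // NaN is false, so this early-out can never fire.
  if (Math.abs(dblDiscriminant) < 0.01) {
    return "near-tangent";
  }
  for (var i = 0; i < coeffs.length; i++) {
    var dblDiscriminant =
      coeffs[i].b * coeffs[i].b - 4 * coeffs[i].a * coeffs[i].c;
  }
  return "hit";
}

// Even a zero discriminant skips the guard:
discriminantCheck([{ a: 1, b: 2, c: 1 }]); // returns "hit", not "near-tangent"
```

Moving the check after the loop (or using |let|, which would throw a ReferenceError instead of silently reading undefined) would surface the bug immediately.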
Please let us know if you run into JS perf problems, these bugs are usually easy to fix.
It rendered this image: http://renderer.ivank.net/balls.jpg :)
I also made the "full" 3D game: http://powerstones.ivank.net/ (uses a GLSL shader).
On the other hand, on Chrome the WebASM performance is ~100ms and JS is just a tiny bit slower at around 120ms!
I've jumped through similar hoops in Go, and even C++ to avoid allocation or GC.
There are a few like that.
- Oberon family (Oberon, Oberon-2, Active Oberon, Component Pascal, Oberon-07)
- C# (especially now with the 7.2 improvements)
- Swift (RC is also a form of GC algorithm)
These are just the best-known ones; there are others if you feel like taking a dive into SIGPLAN papers.
I can also provide examples of how to do it in any of them, if you wish.
I wrote a real-time 3D polygonal rasterizer in JS that supports texture mapping. After switching from regular arrays to Float32Array to store coordinates, it got a huge (~50%) jump in performance.
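The difference is easy to see in a sketch (invented names, not the parent's actual rasterizer code): a Float32Array holds unboxed 32-bit floats in one contiguous buffer, so the engine can read them in a hot loop without type checks or pointer chasing, whereas a plain Array may hold boxed values of mixed types.

```javascript
const plainCoords = [0.0, 0.5, -0.5, -0.5, 0.5, -0.5]; // regular Array
const typedCoords = Float32Array.from(plainCoords);    // contiguous float32 storage

// Hot-loop style access, e.g. averaging the x coordinates
// (stored at even indices in this layout).
function centroidX(coords, vertexCount) {
  let sum = 0;
  for (let i = 0; i < vertexCount; i++) {
    sum += coords[2 * i];
  }
  return sum / vertexCount;
}

centroidX(typedCoords, 3); // same result as with plainCoords,
                           // but with a known element type throughout
```

The algorithm doesn't change at all; the win comes purely from giving the JIT a guaranteed element type and memory layout.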
With that being said, I recently ported a bunch of data heavy JS calculation code that had previously been running in the browser (and downloading about 100MB of data to the browser to do so - ouch) to run on a .NET back-end.
(Ignore the fact that doing this on the client will sound crazy to a professional software engineer but I would argue is totally legitimate in the context of a smart person who is not a software engineer building a prototype with tools they understand, which is what happened.)
Now, in doing the port I did improve and optimise the way the calculations were performed: basically I eliminated some redundant operations and structured the data up-front in a way that made it easier to work with. This in itself only sped things up by a factor of 2 or 3.
What really gave me the performance boost was taking advantage of the strong typing that languages such as C# offer. Just switching from IDictionary&lt;string, object&gt; (which I'd initially used because I wanted to get it working first, and wasn't sure I'd always be working with the same type of data) to strongly typed dictionaries via generics, along with strongly typed arrays, gave me an extra 75 to 100 times performance boost.
In the end the C# version outperformed the JS version by a factor of around 200, mostly due to being able to take advantage of strong typing. And here I'm comparing the classic .NET 4.6 CLR to the latest version of Google Chrome, both running on the same machine.
If you do need to run that sort of code in the browser, and you want consistent performance across browsers, then WebAssembly might be the answer.
As long as you don't need to support IE11, that is. If you do need to support it, I'd suggest finding a way to avoid running large amounts of JS: whilst IE11's performance is definitely the best of any version of Internet Explorer, it's still not what you'd call fast, and it doesn't do a great job of memory management.
This is the kind of code that in previous generations you would have expected to run with a 20-100x penalty in Perl or Python.
How times have changed. I remember running flight sims back in the 16-bit era (Amiga 500, for example) when 10fps would have been considered pretty good, especially if there was a lot going on.
But yes, I think nowadays you're right: 60fps or bust. And this is really why ray-tracing isn't used that much in games - it's just so computationally expensive.
The other three don't seem to work though.
These work fine:
That kind of speedup isn't just a fluke here; it's pretty normal when your work aligns with what a GPU does best.
I mean: https://www.shadertoy.com/view/Mt2yzK
Still though, the potential for using GPUs to accelerate computations is high and I think too often overlooked by web developers, even those doing image processing.
I don't mean how to use a toolchain to compile higher-level languages to WASM. I mean how to write the linear assembly bytecode as seen here on Wikipedia:
Like someone scribbled the colors over the circles instead of filling them.
It declares that floats should only be medium precision by default, but those precision specifiers are only followed on mobile platforms.
Desktop WebGL implementations just promote everything to high precision, so that kind of bug is invisible until you test on mobile.
Not to forget, WASM is at a very early stage; once we get threading support (https://github.com/WebAssembly/threads) we can get much better results.
Colo(u)r me impressed.
I'm getting 0.1 versus 125 for WebAsm versus ASM.js/JS of ~210 – kind of crazy that Chrome can run plain JS as fast as ASM.js (or is it?)
Firefox (57) can't run the shader unless there's some under-the-hood option I have to turn on …
The graphics section in about:support might tell you your WebGL status.
Just a Dell Inspiron 13 laptop with the latest Chrome 63.
It's kind of hard to judge this without having a decent enough look at the code, or at how it's being benchmarked. For example, asm.js requires quite a bit of setup to initialise; if it's restarted from scratch on every loop for benchmarking purposes, that would mess up the timing.