I work on SpiderMonkey JS performance (and got mentioned in the article for fixing a perf bug!) and this is exactly the point I wanted to make. If you're not a VM engineer, it's much harder to write fast JS. Wasm also gives you much more predictable performance across engines, without requiring (often engine-specific) 'hacks' like function cloning.
From what I can tell, doing basic things like a property get on an object in WASM/host-bindings will require spilling out to a function call, with no chance of specialization.
That comment (which I made) explains why the best we can probably expect for wasm accessing normal JS objects is the same level of speed as a modern JS engine's "tier 1"/"baseline" JIT. But the Rust code discussed in the OP wasn't accessing JS objects; it was accessing Rust data that lives in wasm's linear memory, so I don't see how my comment disagrees with what jandem or dikaiosune are saying.
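For contrast, a minimal sketch (not the OP's actual code) of the kind of access that Rust-in-wasm performs: the struct lives in linear memory, so a field read compiles down to a plain load at a static offset, with no JS object model or host call involved.

```rust
// When compiled to wasm32, `Point` lives in wasm's linear memory, so
// field access lowers to an `i32.load` at a fixed offset -- there is no
// property lookup and no call out to the JS host.
struct Point {
    x: i32,
    y: i32,
}

fn manhattan(p: &Point) -> i32 {
    // `p.x` / `p.y` become direct loads at offsets 0 and 4.
    p.x.abs() + p.y.abs()
}

fn main() {
    let p = Point { x: -3, y: 4 };
    println!("{}", manhattan(&p)); // prints 7
}
```

This is why the predictable-performance argument holds for this workload: the hot path never crosses the JS/wasm boundary.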