
Maybe you don't need Rust and WASM to speed up your JS - youngtaff
http://mrale.ph/blog/2018/02/03/maybe-you-dont-need-rust-to-speed-up-your-js.html
======
anp
I think this is an interesting exploration, and I really enjoyed the
description of profiling and improving performance, but I came away feeling
exactly the opposite of the title. It's really cool that JS optimization can
provide so many wins, but this article makes the process seem fairly fickle,
and unless they're familiar with VM internals I would not expect most
developers to complete this journey. Using wasm+rust/c++ is interesting in
part because you can get much more predictable performance out of the box.
When the performance comes from language features instead of VM-specific
optimizations, it might be easier to maintain across revisions and hopefully
less subject to regression from VM changes.

~~~
mraleph
[Thank you for reading the post! I am glad you enjoyed it]

All optimizations in the post can mostly be divided into three large groups:

1) algorithmic improvements;

2) workarounds for implementation-independent, but potentially language-dependent, issues;

3) workarounds for V8-specific issues.

You need to think about algorithms no matter which language you write in, so
we don't need to talk much about the first group. In the post it is
represented by sorting improvements (sorting subsequences rather than the
whole array) and by discussions of caching benefits (or the lack thereof).

The second group is represented by the monomorphisation trick: the fact that
performance suffers due to polymorphism is not really a V8-specific issue; it
is not even a JS-specific issue. You can apply this approach across
implementations and even languages. Some languages apply it in some form for
you under the hood.

The last group is represented by argument adaptation stuff.

Finally an optimization I did to mappings representation (using typed array
instead of an object) is an optimization that spans all three groups. It's
about understanding limitations and costs of a GCed system as whole.

Now... Why did I choose the title? Because I think group #3 represents issues
that should be, and mostly will be, fixed over time, while groups 1 and 2
represent universal knowledge that spans implementations and languages.

Obviously it is up to each developer and each team to choose between spending
N rigorous hours profiling, reading and thinking about their JavaScript code,
or spending N hours rewriting their stuff in language X. What I want is:

a) for everybody to be fully aware that the choice even exists;

b) for language designers and implementors to work together on making this
choice less and less obvious, which means working on language features and
tools and reducing the need for group #3 optimizations.

~~~
vanderZwan
> _monomorphisation trick_

Here's a crazy thing I recently learned: apparently monomorphism isn't just
about "objects with identical keys"; apparently (at least in Chrome), _the
order in which you assign those keys matters_. According to this presentation
from 2015[0], adjusting the following lines in the Octane/Splaytree benchmark
so that node.left and node.right are always assigned in the same order
resulted in 15% better performance:

    
    
        var node = new SplayTree.Node(key, value);
        if (key > this.root_.key) {
          node.left = this.root_;
          node.right = this.root_.right;
          ...
        } else {
          node.right = this.root_;
          node.left = this.root_.left;
          ...
        }
    

Now, I assume that this out-of-order assignment was actually done on purpose,
to benchmark how the JIT handles code like this. Further evidence is that the
SplayTree.Node constructor[1] does not initialize a _left_ or _right_ key
either:

    
    
        SplayTree.Node = function(key, value) {
          this.key = key;
          this.value = value;
        };
    

Still, I wouldn't be surprised if it was common for real-life code to
accidentally have objects that _should_ have the same hidden class end up with
different ones because of this.

[0]
[http://mp.binaervarianz.de/fse2015_slides.pdf](http://mp.binaervarianz.de/fse2015_slides.pdf)

[1]
[https://github.com/chromium/octane/blob/master/splay.js#L390](https://github.com/chromium/octane/blob/master/splay.js#L390)

~~~
ridiculous_fish
Yes, this is because the JS spec requires that object keys are iterated in
insertion order (with a bizarre exception for arrays).

~~~
pdpi
Wait, what? Since when? That wasn’t the case at all 5 or 6 years ago (I got
bitten in the arse by Rhino not implementing it that way).

~~~
epmatsw
Most browsers already did, and some (not all) of the ES2015 features require
property ordering. It's probably easier to just keep everything ordered, since
those methods could need it.
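
For reference, the ES2015 ordering rules can be observed directly: integer-like keys are reported first in ascending numeric order, and the remaining string keys follow in insertion order (the "bizarre exception for arrays" mentioned above).

```javascript
// Mixing integer-like and string keys to show the spec's iteration order.
const o = {};
o.b = 1;
o[2] = 2;
o.a = 3;
o[1] = 4;

const keys = Object.keys(o); // → ["1", "2", "b", "a"]
```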

------
greenhouse_gas
The problem with these tricks is that they're dependent on V8 internals, which
means there's no guarantee they'll be fast on Firefox (or Edge, or even
Safari).

So now if Google changes the back end, websites will slow down, which means
Google has to ossify implementation details, entrenching this sort of black
magic in CS lore forever.

This, honestly, is what I hate about modern dev. A few days back there was a
discussion on how programming is hard nowadays.

Really, programming is more accessible than ever. What is hard is the "black
magic" that's becoming more and more prevalent, most of which is
implementation-defined, that everyone is expected to know (and really, most
people don't actually know anything; they just repeat rumors they read online,
often years out of date).

~~~
kibwen
The article does mention SpiderMonkey, though not Safari or Edge, and the
point does stand that profiling four different JS JITs to guarantee that you
trigger their optimizations is always going to be more work than profiling one
C++/Rust program generated by a single compiler toolchain (though it remains
to be seen how much individual browsers' implementations of WASM will diverge
in runtime optimization potential).

~~~
abritinthebay
They will diverge about as much as JS engines do; that will hardly be a surprise.

~~~
kibwen
I doubt it. If further optimizing pre-optimized assembly were that easy (or
that valuable), then we'd also expect to see more tools for postprocessing
compiled binaries. (Just because WASM is a bytecode format doesn't mean it's
comparable to Java bytecode and Java's JITs; javac isn't an optimizing
compiler.) I'd be happy if anyone actually working on a WASM interpreter could
chime in regarding expected optimization potential.

------
eslaught
> Is it better to quick-sort one array with 100K elements or quick-sort 3333
> 30-element subarrays?

> A bit of mathematics can guide us (100000log100000 is 3 times larger than
> 3333×30log30)

You have to be careful doing this kind of analysis. Big-O is explicitly about
what happens for large values of N. It is traditional to leave off smaller
factors, because they don't matter at the limit of N going to infinity. There
is implicitly a constant coefficient on each component, and that might matter
more at small N.

So e.g. I've seen cases that were O(N^2 + N), which of course you'd
traditionally write as O(N^2), but where the O(N) term mattered more at small
values of N because of constant factors. Whether you care more about small or
large values of N guides whether you'd actually want to go after the O(N^2)
term or not. If you just blindly went for the larger term, you could waste a
lot of time and not actually accomplish anything.
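
A toy model of that point (the coefficients are made up): attach a constant factor to each term of O(N^2 + N) and see which dominates at a given N.

```javascript
// Cost model with explicit (invented) coefficients on each term.
function cost(n, cQuad = 1, cLin = 10000) {
  return { quadratic: cQuad * n * n, linear: cLin * n };
}

// At small N the "lower-order" linear term is the real bottleneck:
cost(1000); // quadratic: 1000000, linear: 10000000
// The crossover is at N = cLin / cQuad = 10000; only beyond that does
// attacking the quadratic term pay off first.
```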

------
ef4
The problem with this kind of deep-dive optimization is the cost of
maintaining it in a long-lived project as the underlying JavaScript engines
keep changing. What was optimal for one version of V8 can actually be
detrimental in another version, or in another browser.

It's precisely the unpredictability of JIT-driven optimizations that makes
WASM so appealing. You can do all your optimizing once at build time and get
consistent performance every time it runs.

It's not that plain JavaScript can't be as fast -- it's that plain JavaScript
has high variance, and maintaining engine-internals-aware optimization in a
big team with a long-lived app is impractical.

~~~
seeekr
It seems to me there is no reason we shouldn't be able to create an
"optimizing Babel" that performs optimizations targeting a specific JS engine
and version, as a build step. I don't think we need to go to a completely
different source language and compilation to WASM just to get permission to
create such an optimization tool.

Such a tool would give you the benefits you're praising in the WASM
compilation workflow: separately maintained, engine-specific optimizations
that can be applied at build time and don't mess up the maintainability of
your source code.

~~~
fenomas
JS engines already do insane levels of optimization, and they do it while
watching the code execute so they understand the code better than any
preprocessing tool can hope to.

What could a tool like you're describing do that the engines don't do
themselves?

~~~
adrianN
I assume that JITs don't do very expensive optimizations because they have to
trade off execution speed against compilation time. JITs are also fairly blind
on the first execution of a piece of code. Static optimizations are not made
obsolete by the existence of JITs.

~~~
fenomas
Expensive optimizations like what? Can you give a before/after example of
something such a tool might do?

(Note: I'm glossing over the case where one is using bleeding-edge syntax that
a JS engine doesn't yet know how to optimize. In that case preprocessing out
the new syntax is of course very useful, but I don't think this is the kind of
optimization the GP comment was talking about.)

~~~
seeekr
JIT engines usually don't do static analysis. I'm not sure whether that is
because the cost is that much higher, but a hint towards why could be that the
engine simply does not know which parts of the (potentially huge amount of)
code that was loaded are actually going to be needed during execution, so
analysing all of it is likely to bring more harm than gain.

As an example of something static analysis could have caught, take the
"Argument Adaptation" example from the article[0]. Here the author uses
profiling to learn that by matching the exact argument count when calling a
function, instead of relying on the JS engine to "fix that", performance can
be improved by 14% for this particular piece of code. Static analysis could
easily have caught and fixed that, automatically performing essentially the
same small refactoring the author did manually.

[0] [http://mrale.ph/blog/2018/02/03/maybe-you-dont-need-rust-to-speed-up-your-js.html#optimizing-sorting---argument-adaptation](http://mrale.ph/blog/2018/02/03/maybe-you-dont-need-rust-to-speed-up-your-js.html#optimizing-sorting---argument-adaptation)
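
For concreteness, a hedged sketch of the arity issue being discussed (illustrative code, not the article's): V8 inserts an "arguments adaptor" frame whenever a call site's argument count differs from the callee's declared parameter count.

```javascript
// Arity 3, but Array.prototype.sort passes exactly 2 arguments: the
// mismatch forces the arguments adaptor on every call.
function compareLoose(a, b, unusedHint) {
  return a - b;
}

// Arity 2 matches the actual call, so no adaptor frame is needed.
function compareExact(a, b) {
  return a - b;
}

// Both produce the same result; only the second avoids the adaptation cost.
[3, 1, 2].sort(compareExact); // → [1, 2, 3]
```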

~~~
fenomas
I replied in main to your other comment, but regarding the "Argument
Adaptation" issue it's maybe worth noting that I'm 90% sure V8 would have
optimized this automatically if not for the subsequent issue (with
monomorphism). I'm dubious that the former issue could be statically fixed as
easily as you suggest, but either way I think it should be considered a
symptom of the latter issue.

------
solidsnack9000
This is a great article, and it goes to show that statements like "language X
is fast" become a little blurry when you put the language in the hands of a
skilled developer, who can go beyond the standard idioms and surface-level
understanding of a language and use it like it's another language.

At the end of the article, the author wisely chooses to move some objects out
of the control of the GC:

 _We are allocating hundreds of thousands Mapping objects, which puts
considerable pressure on GC - in reality we don’t really need those objects to
be objects... First of all, I changed Mapping from a normal object into a
wrapper that points into a gigantic typed array..._

This suggests to me that we are no longer programming the way one usually does
in a dynamically typed, garbage collected language -- and thus it might still
be the right decision to move to something like Rust (or Swift or Go) where
there is considerably more control over allocation.
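
The quoted trick can be sketched roughly like this (field layout and names are my invention, not the library's actual code): instead of hundreds of thousands of small Mapping objects, pack each mapping's fields into one typed array and address them by index.

```javascript
// Hedged sketch of the "typed array instead of objects" optimization.
const FIELDS = 3; // e.g. generatedLine, generatedColumn, originalLine
const store = new Int32Array(100000 * FIELDS);

// Write mapping i's fields into its slot in the flat buffer.
function setMapping(i, genLine, genCol, origLine) {
  const o = i * FIELDS;
  store[o] = genLine;
  store[o + 1] = genCol;
  store[o + 2] = origLine;
}

// Read a field back without ever allocating a Mapping object.
function generatedLine(i) {
  return store[i * FIELDS];
}

setMapping(0, 10, 4, 7);
generatedLine(0); // → 10
```

The GC never sees per-mapping objects this way; it only tracks one large allocation.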

The author is able to achieve a speed-up of ~4x, which is close to the 5.89x
achieved by the Rust implementation. There are benefits to having all of one's
code in the same language; but there are also benefits to switching languages
to obtain better ergonomics and safety properties.

~~~
jandem
> The author is able to achieve a speed-up of ~4x which is close to the 5.89x
> achieved by the Rust implementation.

This is not a fair comparison because the author made algorithmic improvements
that would also improve the wasm version if applied to the Rust code. Source:
[https://twitter.com/mraleph/status/965616993310265344](https://twitter.com/mraleph/status/965616993310265344)

~~~
mraleph
FWIW, a quick look at the Rust code reveals that that implementation is also
algorithmically different from what I was optimizing, e.g. it only sorts the
generatedMappings array.

In reality this means that the performance of my code should not be that far
from what WASM is showing, because sorting originalMappings (which I do
eagerly and the WASM version does lazily) is one third to one half of the
overall runtime.

I will try to measure and update the post tomorrow or Wednesday.

------
themihai
I wonder if the performance gain is worth the effort. Surely you can always
squeeze more performance out of JS, as out of any other language, but what's
the point if you spend hours matching the performance you get for "free" from
other languages?

As far as WASM is concerned, I'm more excited about the possibility of running
any programming language on the web than about the raw performance gains. So
far it is still years away from that goal (i.e. the lack of web APIs/DOM
access makes it useless for web dev).

~~~
gear54rus
> hours to match the performance you get for "free" from other languages?

In a sense, you pay back the hours you saved by using an easier, more
intuitive language like JS. Or you can choose not to, and still have pretty
good performance and blazing-fast dev cycles.

~~~
thefounder
I believe there are languages easier and more intuitive than JS. The lack of
DOM and GC support in WASM is the only reason JS still rules the web.

As far as performance is concerned, I believe profiling against the internals
of the VM (as the author does) is not a good fix in the long term.

~~~
ShinTakuya
I agree regarding JS not being the easiest or most intuitive language, but I
don't see JS going away any time soon and I don't think that's a bad thing.
The last 5 years or so have seen JS transform from one of the most frustrating
languages to one of the better ones. I think Typescript is where the future of
web dev lies. Typescript for your main logic and WASM for optimising the parts
that need it.

~~~
thefounder
Yeah, JS is better now, but I bet we could do even better starting with a
clean slate. Web dev shouldn't be tied to a single programming language. How
would you like using PHP and its derivatives (i.e. Hack) for all web APIs and
back-end work? I can tell you it has made great progress since PHP 4.

~~~
ShinTakuya
Oh, I'm sure we can do better. But would it be worth throwing away decades of
libraries, many of which have ironed out many subtle logic bugs over time?
That's a different argument. I'm not going to say it's not worth it (there is
a lot of mess that could be removed if we started from scratch).

It's the same argument any junior engineer makes when they start at a company
with a legacy codebase. They always want to start from scratch, but they don't
realise the cost, or the fact that while they'll avoid 100 mistakes of the
past, they'll also make 100 new ones. Why not just make JS better? It's
already happening year by year.

------
lmm
I'm excited about compiling to WASM not for performance but for correctness.
Typescript is better than nothing but it really can't compete with the safety
and ease you get from a language with a really good type system.

~~~
faitswulff
One thing I've been confused about with transpiled languages is how
correctness transfers. How do types or lifetimes etc. carry over to WASM, for
instance? Or does it just depend on correctness being verified in the pre-
compilation/transpilation stages?

~~~
ehsanu1
Correctness is typically checked long before code generation happens. This
does mean the code generation backend must be trusted, whether the target be
x86 or wasm. Just as you have to trust CPUs to not expose processes' memory to
each other (oops).

~~~
eastWestMath
Typed assembly is a thing. It's mainly used to ensure the compiler itself is
correct, but it does give stronger guarantees. I think Frank Pfenning at CMU
has written a language that's dependently typed all the way down, so its IR
and assembly are both dependently typed.

------
patientplatypus
So...

I'm a (junior/so-so) react dev. I like the language, I probably could get
better at it but I've gotten to the point where I like the sound of my own
music.

That said, my question is the following. There seems to be a lot of resources
on the web of the type "Hey Rust and WASM is a thing! You can make webpages
with it". Ok, fine. However, I don't see a lot of the things that are in
libraries like Vue and React that offer me SPA, fast development time and
component (OR!) functional pattern design (I won't mention the ability to add
npm packages, because yeah, Rust/WASM doesn't have an ecosystem yet, so that
might be punching a bit below the belt). Is there anyone out there making a
React like or "eco-system builder"-esque platform that would give me a lot of
the benefits I'm seeing with React but with increased performance?

Also, I've done some low level programming before and think that Rust is very
cool for that (yay memory safety! yay error messaging (no _seriously_ yay)!).
However, and I may be betraying my ignorance here, if I want to animate a div
to fly across the page, am I going to have to write _lots_ of low-level code
to do that? If that's the case, and development time suffers to eke out that
extra bit of performance, I can't see this having much utility outside of
niche fields like game development.

I don't mean to be overly critical mind as I'm still (and probably will always
be) a bit of a n00b. I'd love it if somebody would point me at some resources
that I could burn a weekend on, if I thought the juice was worth the squeeze.
I'm just not sure I know enough to know if this is something that I could be
productive in (some day).

~~~
steveklabnik
> "Hey Rust and WASM is a thing! You can make webpages with it". Ok, fine.

It is very early days. More to come...

> However, I don't see a lot of the things that are in libraries like Vue and
> React

Yes. There's not too many of these yet; there are some though. For example,
[https://github.com/DenisKolodin/yew](https://github.com/DenisKolodin/yew)

There's also the inverse: can we re-write parts of libraries like Vue or React
in something that compiles to wasm, so that you get better performance as the
user? I know of at least one major SPA framework that's doing this. Don't want
to spill the beans too much, even though it is technically public knowledge.

That said, that's how I think wasm will impact the lives of most JS
programmers: as technology that underpins the libraries they use, making those
libraries better. Unless you want to, or unless you want to write a high-
performance library, I wouldn't expect wasm to really change the way most JS
programmers operate. It's about augmenting JS, not replacing it.

> if I want to animate a div to fly across the page, am I going to have to
> write lots of low level code to do that?

That depends entirely on the library!

> I'd love it if somebody would point me at some resources that I could burn a
> weekend on

They're sorta scattered all over the place right now. Such is life for early
adopters. More will come as stuff matures. Don't underestimate how much this
will change as the tooling gets built out, for example.

~~~
spicyj
> Can we re-write parts of libraries like Vue or React in something that
> compiles to wasm, so that you get better performance as the user?

For React, we would love to do this although it's not clear what parts of
React would benefit from being moved to wasm right now.

~~~
steveklabnik
Yeah, that's what I've been hearing through the grapevine. We'll see! I'd love
to see it.

~~~
spicyj
Call us up if you have any advice!

------
austincheney
I thought all the benchmarks indicated that WASM is still slower than JS. That
being said, the main performance benefit WASM offers is that it doesn't need
garbage collection, which gives more consistent execution.

WASM isn't about performance. It is about writing applications in any language
and importing those applications into an island in a web page.

~~~
madflame991
> WASM isn't about performance. It is about writing applications in any
> language and importing those applications into an island in a web page.

You don't really need wasm for that, do you? Anything compiles to JS nowadays
and you'd actually get easier access to the DOM and GC and support for source
maps (are those working for wasm yet?)

~~~
austincheney
Flash has always been better for media and games than JavaScript. WASM fills
that niche.

------
bitL
So much work to achieve what should be default behavior :-/

How did we end up wasting so much time on trivialities?

~~~
sametmax
When JS came out, it was poorly designed, but nobody cared because we used it
only to make snowflakes appear on the web page.

Then MS gave us AJAX and 37signals made it popular, until apps like Gmail made
it so mainstream it was impossible to go back to old static pages.

But it was too late. The shitty language we had was the only one available
everywhere for dynamic web pages.

IE would not move, and Firefox and Opera were underdogs, spending their
resources on more important things. So nobody tried to implement a better
existing language.

When Google faced the challenge of creating Chrome, they had to be compatible.
So instead of implementing a better language, they also used JS, and injected
millions of dollars into V8 to give it decent performance.

After that, JS was usable, and so we moved on.

~~~
petre
> So instead of implementing a better language, they also used JS

They did implement a better language, Dart. But it was too late, like you
said. Maybe we'll have a chance of using it for mobile apps with Flutter.

~~~
sametmax
Dart came way, way later. And it was a new language, when they could have just
used an existing one. They could have implemented Lua, Ruby, Python, anything
with a good track record. With the millions spent on V8 and those geniuses
working on it, can you imagine how fast one of those languages would have
become? The tooling it would have had?

~~~
kjksf
Actually, Google tried to make Python faster with the Unladen Swallow project.

They failed miserably.

And several other projects have failed miserably at the same thing as well.

Either way, speed was not the main reason they created Dart.

Dart was designed for writing large web apps.

Dart's top design constraint was good interop with JavaScript, meaning
transpilation of Dart to decent JavaScript and consumption of existing
JavaScript libraries from Dart.

I repeat: top design goal.

You can't take a language (be it Python, Ruby or Lua) that was not designed
with that constraint in mind and magically make it work.

There are transpilers from those languages to JavaScript, but they are toys.

You just can't reconcile Python semantics with JavaScript semantics in a
reasonable way.

So they did the next best thing: they designed a better Java/C# while keeping
JavaScript interop as a priority.

Now Dart is morphing because the top design goal is to make it the best
language for cross-platform mobile (i.e. iOS and Android) apps, which requires
different trade-offs.

~~~
sametmax
PyPy is proof that you can speed up Python considerably. And it's written in
Python. Imagine what could be achieved if someone wrote something like that
in, say, Rust.

My guess is that they didn't invest anywhere near the resources in Unladen
Swallow (which was probably a side project) that they invested in the Chrome
VM (which was a core project).

Actually, I spend quite a lot of time on the Python mailing list, and there
you can see they regularly find things to improve, perf-wise. They just have
terribly few resources.

I've rarely seen a project as popular as Python, used by so many rich, huge
corporations, with so few resources. It's heartbreaking.

------
NiceGuy_Ty
This is a well-written article, and I absolutely agree with the idea that
profiling and analysis are more important than language choice for eking out
performance wins.

That being said, some of these optimization techniques completely took me by
surprise. Defining the sort function as a cache lookup that converts the
sorting template to a string and then builds an anonymous function out of that
string which is finally used as the exported function seems, to me, like an
extremely roundabout way to achieve inlining the comparator. And the argument-
count adapter having such high overhead on V8 seems like something that should
generate a warning for the developer.

The cache analysis and algorithmic improvements seemed fairly straightforward,
but when you're at the point of manually implementing memory buffers to
alleviate GC pressure, you're diving below the level of abstraction that the
language itself provides. At that point, I think the argument for switching to
a language designed to operate at that level of abstraction holds some sway.

------
FeepingCreature
What a well-designed and well-researched article!

------
z3t4
We should be more wary of premature optimizations; in the article, caching in
the original code actually made it slower! Always measure! Write _naive_ code
and measure; the JavaScript engines are very good at optimization, especially
V8, and the others are catching up.

However, when I do optimize JavaScript code I often get 10-100x performance,
usually by writing _better algorithms_, i.e. no "black magic". So the original
code in the article is not that bad, considering he "only" got 4x performance.

Moving to another programming language / WASM for less than 2x performance _is
not worth it_, unless you hate JavaScript.

~~~
Narishma
> like in the article where caching in the original code made it slower!

Caching may have been faster originally, but it became slower thanks to the
continual improvement of the JavaScript VMs.

------
vanderZwan
Tangent: I see that the author focused on improving sorting algorithms, and
also at some point switches to a Uint8Array (although not in the sorting
part).

I recently discovered that a JavaScript implementation of radix sort can be up
to four times faster than the built-in sorting algorithms for
TypedArrays[0][1][2]. Imagine how much faster a good WASM implementation could
be!

It also makes me wonder why browsers don't make use of the guaranteed memory
layout of typed arrays to use faster native algorithms. Sure, typed arrays
have to support the sorting API, which is comparison-based and works with
closures. But why not detect when someone calls sort() without any arguments,
and make that very common case use faster native code?

Because for me, this performance difference matters: I am animating plots
where I need to sort more than 100k elements each frame, and sort eating up
5ms versus 20ms is the difference between smooth and choppy animations.

[0] [https://run.perf.zone/view/radix-sort-uint32-1000-items-typed-and-plain-array-1510082514775](https://run.perf.zone/view/radix-sort-uint32-1000-items-typed-and-plain-array-1510082514775)

[1] [https://run.perf.zone/view/radix-sort-uint32-1000000-items-typed-and-plain-array-1510082642142](https://run.perf.zone/view/radix-sort-uint32-1000000-items-typed-and-plain-array-1510082642142)

[2] [https://run.perf.zone/view/Radix-sort-Uint8Array-loop-vs-fill-in-place-1510007933053](https://run.perf.zone/view/Radix-sort-Uint8Array-loop-vs-fill-in-place-1510007933053)
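
For reference, the kind of routine those benchmarks test is a plain LSD (least-significant-digit) radix sort; here is a minimal sketch for a Uint32Array (my code, not the benchmark's):

```javascript
// LSD radix sort over 8-bit digits: four counting-sort passes, no
// comparisons, no closures, exploiting the fixed memory layout of the
// typed array.
function radixSortUint32(input) {
  let src = input;
  let dst = new Uint32Array(input.length);
  const count = new Uint32Array(256);
  for (let shift = 0; shift < 32; shift += 8) {
    // Histogram of the current 8-bit digit.
    count.fill(0);
    for (let i = 0; i < src.length; i++) count[(src[i] >>> shift) & 0xff]++;
    // Exclusive prefix sum turns counts into destination offsets.
    let total = 0;
    for (let i = 0; i < 256; i++) {
      const c = count[i];
      count[i] = total;
      total += c;
    }
    // Stable scatter into the destination buffer.
    for (let i = 0; i < src.length; i++) {
      dst[count[(src[i] >>> shift) & 0xff]++] = src[i];
    }
    const tmp = src; src = dst; dst = tmp;
  }
  return src; // after four passes src points back at the input, now sorted
}
```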

~~~
spiralx
On my OnePlus 5T the standard sort() was twice as fast as the radix sort for
the UInt8 array, but it was only 30% slower for the Float64 and regular
arrays, whereas sort() was hundreds of times slower - strange...

------
vortico
This is a broad, naive question, but the number of responses and upvotes on
this post suggests to me that many people actually _need_ to speed up their
JS. I've never once come across this problem in web app development. The
bottleneck is always something else: DOM rendering like layout changes,
networking, handling large WebGL vertex buffers for video games, etc. In which
use cases is JS performance significant?

~~~
acemarke
In this case, the sourcemaps library isn't normally running as part of user
application code. It's primarily used by the browser's DevTools, and server-
side build tools like Webpack and Gulp.

The faster sourcemaps can be parsed and manipulated, the faster the DevTools
and build tools will execute.

------
sehugg
I rely heavily on a decent JS performance baseline for
[http://8bitworkshop.com/](http://8bitworkshop.com/), and I also rely on
asm.js / WASM. But I need different things from them.

For JS, I need consistency and stability, because I'm dynamically generating
code. Usually I'm pretty satisfied, but sometimes after recompiling code I get
a huge performance hit for no reason.

For WASM, I know I have stability, but I need faster load times. On my
Chromebook, for example, it takes 10-20 seconds just to load the WebAssembly.
If this problem is solved, I might move everything performance-sensitive over
to WASM eventually.

------
staticassertion
Awesome. I think it makes a lot of sense to explore new algorithmic approaches
before choosing to reimplement in a new language, and thankfully these are not
mutually exclusive.

To those saying "these are implementation-defined optimizations" etc.: you do
the exact same thing in Rust. I know some Rust code is 'fast' and some is
'slow', and I have to understand Rust and, to some extent, the state of LLVM +
Rust. This is simply part of writing fast code, no matter the platform or
language.

Nice writeup!

------
eximius
I'm continually surprised by the Rust -> WASM push. I see projects like Redox
and Servo and ripgrep and Tokio, i.e. native or systems-level things, as its
true calling.

I don't want to write a webapp in Rust. It'll always be second class (though
maybe that won't be a problem if the WASM APIs get good enough...).

~~~
steveklabnik
Think of it this way: wasm is pretty similar in many ways to an embedded
platform. Rust wants to be good at embedded, so making Rust good at wasm fits.
A lot of the work is identical.

~~~
eximius
I think that is a very post-hoc rationalization to justify the silliness that
is/has been eating the JS world (emscripten and asm.js show that this sort of
thing has been going on for a while), but it's an entirely valid reason to
indulge it for the interest and experience.

+1

~~~
steveklabnik
I don't think it's a reason that wasm exists, but I do think it's one of the
(multiple) reasons for us to invest in making Rust -> wasm an excellent
experience.

------
hokkos
Maybe there should be a tool, from linters or VMs, to warn against argument
adaptation and monomorphisation issues. For the rest, it's a lot of know-how
in JS that comes for free with more performance-minded languages. In the end
the optimized JS is 4 times faster, but WASM still seems 6 times faster.

~~~
bcoates
It's only necessary in edge cases of super-hot code; avoiding argument
adaptation "because performance" is a bad idea.

It's also a V8-specific weakness; it's not hard to imagine a small fix getting
rid of the performance penalty.
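
For context, a minimal sketch of what argument adaptation means (illustrative
code, not from the article): V8 historically built an extra "adaptor frame"
when a call site's argument count didn't match the function's declared
parameter count, and you can sidestep that by keeping the arity matched:

```javascript
// "Argument adaptation" kicks in when a call passes a different number of
// arguments than the function declares; the engine pads or truncates the
// argument list via an adaptor frame.

function decode(buf, offset, limit) {
  // Treat a missing limit as "to the end of the buffer".
  if (limit === undefined) limit = buf.length;
  let sum = 0;
  for (let i = offset; i < limit; i++) sum += buf[i];
  return sum;
}

const data = [1, 2, 3, 4];

// Mismatched arity: two arguments for a three-parameter function,
// so the engine has to adapt the call.
const a = decode(data, 1);

// Matched arity: pass the default explicitly so the call site's
// argument count equals the declared parameter count.
const b = decode(data, 1, data.length);

console.log(a === b); // both sum 2 + 3 + 4 = 9
```

Whether that's worth doing outside super-hot paths is exactly the judgment
call above.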

~~~
kjksf
Given that this contrasts a JavaScript implementation with a Rust
implementation compiled to WASM: my 5-minute investigation shows that Rust
doesn't have optional arguments at all.

So in Rust you would be forced to write it the way you consider a "bad
idea".

Many people think that default function arguments are a bad idea, and
languages like Rust, C, Go, and Java don't even have them.

~~~
qarioz
You get creative: pass a map, pass an Option. It's trivial to do.

------
fulafel
Next steps: leverage web workers and GLSL?

------
hitekker
Irrelevant and maybe a bit small-minded, but the font of the article is too
squished to be pleasantly readable.

~~~
mraleph
Not irrelevant. Question for you, is this more legible:

[https://twitter.com/mraleph/status/965686742614462466](https://twitter.com/mraleph/status/965686742614462466)

I can update CSS if that helps.

UPD. Updated CSS to use non monospace fonts for the body.

~~~
hitekker
This is better by far. I'd be interested if there's a better combination of
code + paragraph font, but otherwise the only other minor adjustment I'd make
is increasing the padding / margins between text and code.

Thanks for taking criticism in a healthy way.

------
alangpierce
For writing high-performance JavaScript, I'm really hoping that AssemblyScript
takes off[1]. The project is still in its early stages, but theoretically it
can play the same role as Cython: you write JS/TS code, find the slow parts,
and get perf improvements by adding stricter types and avoiding dynamic JS
features. You can stay pretty much in JS rather than having to switch to a
completely new language to get predictably fast performance.

[1]
[https://github.com/AssemblyScript/assemblyscript](https://github.com/AssemblyScript/assemblyscript)

------
jokoon
For me the purpose of wasm would not be just performance, but to let me avoid
javascript entirely.

Javascript is already very fast, but compiling to javascript feels awkward.

~~~
mraleph
I don't think you can avoid JavaScript even with WASM - and WASM in its
current form is, for some languages, a worse compilation target than
JavaScript.

~~~
jokoon
Well, sure, it doesn't support the DOM yet, but I think the design goal of
WASM is to do just that.

Distributing programs online efficiently is something that should have been
trivial since the beginning of computing.

The web is already platform independent, but it also needs to be fast and
language independent.

------
bitdivision
I can't see any numbers comparing the rust implementation and the optimised
javascript. Did I miss them in the article?

~~~
steveklabnik
A comment upthread did the math:

> The author is able to achieve a speed-up of ~4x which is close to the 5.89x
> achieved by the Rust implementation.

~~~
mraleph
The math is not that obvious, because the Rust implementation does not match
1:1 what the baseline JS version was doing (e.g. it sorts only
generatedMappings and not originalMappings).

I will do measurements later to compare and update the post.

~~~
steveklabnik
Sounds good!

I enjoyed the article :)

------
adamnemecek
It's not just about the speedup; it's also about writing the backend and
front-end in one language. But the speedup is real too.

------
abritinthebay
I’m confused as to why people think this isn’t generally applicable to JS
here.

Very few of these changes are VM-specific or likely to change (or any more so
than WASM implementations).

\- choose your algorithm carefully

\- make sure the data you're applying the algorithm to is a good fit for it

\- pay attention to arity & GC pressure

None of these are hard to do in JS & most of the VM debugging was to help
identify problems in _existing_ unoptimized code.

The rest are lessons you can take into ANY JS data processing.
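
The GC-pressure point can be sketched like this (illustrative code, not the
article's actual implementation): packing records into one flat typed array
instead of allocating an object per record trades some readability for far
fewer heap allocations:

```javascript
// Object-per-record: N small heap allocations the GC must trace.
function makeObjects(n) {
  const out = [];
  for (let i = 0; i < n; i++) out.push({ line: i, column: i * 2 });
  return out;
}

// Flat typed array: one allocation, fields at fixed offsets.
const FIELDS = 2; // layout: [line, column]
function makeFlat(n) {
  const out = new Int32Array(n * FIELDS);
  for (let i = 0; i < n; i++) {
    out[i * FIELDS] = i;         // line
    out[i * FIELDS + 1] = i * 2; // column
  }
  return out;
}

const objs = makeObjects(3);
const flat = makeFlat(3);
// Same data, different representation: record 2's column either way.
console.log(objs[2].column === flat[2 * FIELDS + 1]); // true
```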

------
Animats
Source maps are a debug tool. Why does the performance matter?

(And if you're shipping so much JavaScript to your site users that you need to
"minify", maybe you're doing it wrong.)

~~~
orf
Tools like Sentry parse them and need consistently fast performance in this
area. Other debugging tools use them too.

Your statement in brackets seems very "those damn kids"-ish. You need to
"minify" your JS files even if you're shipping a small amount of JS, because
loading performance matters and JS minifies very well.

~~~
masklinn
Browsers also use them to display "expanded" code in the console and debugger.

