
The Baseline Interpreter: A Faster JavaScript Interpreter in Firefox 70 - edmorley
https://hacks.mozilla.org/2019/08/the-baseline-interpreter-a-faster-js-interpreter-in-firefox-70/
======
the_duke
The Mozilla tech blogs are always a good read.

Informative and easy to digest. I can only think of one other company blogging
with similar consistency/quality: Cloudflare.

Firefox performance has seen tremendous gains since their Project Quantum
efforts.

It feels on par with Chrome for me pretty much everywhere.

There is one glaring omission though: a single company where I have problems
with FF on multiple apps - Google.

Gmail was still dog-slow the last time I tried, YouTube can send my fan into a
frenzy, and Docs is also regularly problematic (on Linux).

~~~
chao-
I have the opposite experience with YouTube.

With an AMD GPU on Linux, Chrome churns, drops frames, and stutters horribly
on even 1080p videos at 30fps. Firefox, meanwhile, handles 1440p60 video with
no problem. Even two such videos simultaneously, on separate monitors.

Chrome became inexplicably better for parts of early 2019 (I did not note
which versions), but starting 2 or 3 months ago Chrome returned to being
unusable for YouTube beyond 720p videos.

Before anyone asks, this is on an AMD RX 580 with Mesa 18.0.

~~~
usepgp
Hey, I'm on the Chromium videostack team; would you mind filing a bug report
for what you see happening with YouTube?
[https://bugs.chromium.org/p/chromium/issues/entry](https://bugs.chromium.org/p/chromium/issues/entry)

~~~
nwallin
Fixing this would be like eight steps forward:
[https://bugs.chromium.org/p/chromium/issues/detail?id=137247](https://bugs.chromium.org/p/chromium/issues/detail?id=137247)

The annoying part is that hardware acceleration works on Chrome OS, so we know
the support is buried in there somewhere.

~~~
mehhh
Google closed that as Won't Fix; as an organization they are very opinionated
and steadfast in their positions.

I doubt they will take Linux outside their own walled gardens seriously;
Google has already shown its indifference to us Linux users.

~~~
Dylan16807
They _opened_ it as Won't Fix, which appears to be their way of saying that
they'll accept work but it has a priority of less than zero.

------
pizlonator
Worth noting that this basically means that all of the JS engines are
converging on what JavaScriptCore pioneered:

\- More than just two tiers (JSC has four, and I guess Moz has four too now; I
guess it's just a matter of time before V8 follows).

\- Bottom tiers must include a fast interpreter that uses the JIT ABI and
collects types (Ignition and this Moz interpreter smell a lot like JSC's
LLInt, which pioneered exactly this for JS).

It's weird that they kept the C++ interpreter. But not too weird, if the IC
logic in the new interpreter is costly. In the LLInt, the IC/type logic is
either a win (ICs are always a win) or neutral (value profiling and case flag
profiling costs nothing in LLInt).

Also worth noting that this architecture - a JIT ABI interpreter that collects
types as a bottom tier - is older than any JS engine. I learned it from
HotSpot, and I guess that design was based on a Strongtalk VM.

This is the current state of the art of JSC's bottom interpreter tier FWIW:
[https://webkit.org/blog/9329/a-new-bytecode-format-for-javascriptcore/](https://webkit.org/blog/9329/a-new-bytecode-format-for-javascriptcore/)

~~~
The_rationalist
Are there any solid benchmarks comparing JSC vs V8?

~~~
pizlonator
JSC has had a set of large metabenchmarks like JetStream2 for a while. We have
been the fastest for a while but V8 is gaining, so the current advantage is
significant but not earth shattering.

The last time V8 had their own benchmark, they retired it right after we beat
them on it and made a post saying that benchmarks are bad because people cheat
on them.

Around that time I stopped seeing google.com claim that I should switch to
Chrome “because it’s faster”.

So basically JSC is fast enough to have made V8 ragequit benchmarking. Hope
that answers your question!

~~~
kevingadd
I've always wished I could actually benefit from JSC without dropping a couple
thousand dollars on a macbook and having to put up with the world's worst
keyboard, because your browser team kicks ass :)

At least you're pressuring Mozilla and Google to step up their game on
runtimes!

~~~
floatboth
It's not Mac-exclusive; WebKitGTK uses JSC too.

------
_bxg1
Something I'm curious about is why JS developers don't have the option to send
along some of this information themselves: type information, JS bytecode, etc.,
given that we often have it on hand already (TypeScript) or could integrate it
into our build process (webpack). Obviously plain JS needs to still work
without all that, but it could be a compelling point of optimization for
large-scale apps. Perhaps the JS bytecode just isn't standardized across
browsers?

~~~
WorldMaker
JS tried to add (optional) type hints in the ES4 standard that was never
adopted (outside of tangential things like ActionScript 3).

It would be great if TypeScript hints could pass right along to the JITs as
useful optimization factors, but it currently sounds like TC39 would prefer
not to recreate the disasters of ES4 and is staying out of type hints for the
foreseeable future.

(Well-typed code should prevent most JIT bailouts, at least. TypeScript
linters could possibly give even better "shape" warnings than they currently
do, such as catching the issue the React team recently found of bailouts in
the V8 engine due to "shapes" being built with "small integers" being
reallocated to doubles at runtime. However, such lint warnings would probably
be JIT engine specific, and maybe premature optimization in 90%+ of usages.)
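The "shape" pitfall mentioned above can be sketched like this (field representations are engine-internal and version-specific, so this only illustrates the pattern; the names and the defensive advice are hypothetical, not guaranteed engine behavior):

```javascript
// V8 tracks whether an object field holds small integers (Smis) or
// doubles. If a field is first seen holding Smis and a later allocation
// stores a fractional value, the field's representation can migrate to
// double, and optimized code specialized on the integer layout may
// deoptimize.

function makePoint(x, y) {
  return { x, y };
}

// Every field starts out as a small integer: one stable representation.
const intPoint = makePoint(1, 2);

// A fractional value migrates the field to a double representation,
// potentially invalidating code specialized on the integer layout.
const doublePoint = makePoint(1.5, 2.5);

// Defensive pattern sometimes suggested: keep each field's numeric kind
// consistent across all call sites so the representation never changes.
```

Observing the deoptimization itself requires engine tooling (e.g. V8's `--trace-deopt`), which is beyond what plain JavaScript can see.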

~~~
_bxg1
Yeah it's probably not worth it to add types to JS the language, but what if
you could ship standard metadata files similar to source-maps that only
included type information, which browsers could leverage to speed up
compilation?

~~~
tomcam
Sounds good in theory, but it seems like rich territory for mismatched types
in the source and map files.

~~~
_bxg1
I don't see why that would happen; you wouldn't be writing these by hand

------
eternalny1
> Baseline JIT compilation is fast, but modern web applications like Google
> Docs or Gmail execute so much JavaScript code that we could spend quite some
> time in the Baseline compiler, compiling thousands of functions.

Good read. The above-mentioned web apps are my biggest pain point at the
moment with Firefox. I still use Gmail in Firefox, but it's much slower than
running it on Chromium. I just accept that, since Google makes all of that.

However, with Google Docs I actually switch over to a Chromium browser, even
the new Edge beta, because it is simply too slow in Firefox.

It looks like they've taken notice of that and are tackling it head on.
Excellent work!

------
mikepavone
One thing that wasn't clear to me from this post is why they still need the
C++ interpreter at all. I assume there is some non-obvious cost that makes it
not worth it for the coldest of code, but I'm having a hard time guessing what
it may be.

~~~
jandem
> One thing that wasn't clear to me from this post is why they still need the
> C++ interpreter at all.

A lot of code on the web is very cold (executed once or twice) and for such
code the Baseline Interpreter would add some overhead (requires allocating a
JitScript storing the IC data for example and we would then spend more time in
IC code as well). It's possible this could be mitigated or fixed with
additional work, but we need to keep the C++ interpreter anyway (not all
platforms have a JIT backend and it's useful for differential testing) so it's
not a priority right now.

~~~
fbender
Can you run the Baseline Interpreter with the costly parts disabled? Even if
the code then runs at approximately the same speed (and cost) as the C++
interpreter, you'd save maintaining a bunch of code. I assume implementing the
missing backends offsets maintenance costs in the long term.

------
giancarlostoro
I'm waiting to hear when they start to rewrite their JS interpreter in Rust.
It will make things kind of interesting, especially if it becomes a capable
stand-alone JS interpreter.

~~~
oscargrouch
To be fair, I think this is an exercise in vanity and, why not say it,
stupidity.

JavaScript VMs these days are fairly complex beasts embodying huge amounts of
man-hours and expertise. Also, technically this would be an inferior solution
compared to what V8 did with its interpreter in TurboFan (and what
SpiderMonkey is doing now, as presented in the article): generating pure
assembly through the TurboFan backend (the same technique used by LuaJIT
before with great success).

By the way, let's not forget we are talking about Mozilla here, where a couple
more misguided projects could mean the abrupt end of an organization that is
struggling to keep its market share in the browser wars.

There are a couple of things that could benefit from being recoded in Rust,
but modern super-powered JavaScript VMs are hardly one of them.

Of course, if on the sidelines someone crafts a JS VM in Rust and after, I
don't know, 4 years you have a mature enough JIT VM, maybe there will be a
reason to move. But Mozilla itself investing its unsustainable, limited,
on-the-brink-of-extinction funds in something that will require a lot of money
and in the end get you basically the same performance as the old C++ JIT is a
pretty bad move.

~~~
giancarlostoro
There's already a JIT for JS coded in Rust:

[https://blog.mozilla.org/javascript/2017/10/20/holyjit-a-new-hope/](https://blog.mozilla.org/javascript/2017/10/20/holyjit-a-new-hope/)

The overall goal is to have Firefox recoded in Rust, why would a JavaScript
interpreter be left out?

[https://wiki.mozilla.org/Oxidation](https://wiki.mozilla.org/Oxidation)

That's the whole point of Oxidation.

~~~
oscargrouch
Of course they can do it. I just don't think it is a clever move for them to
do it.

Firefox is having a hard time competing with Chrome, and they took too long
pursuing other goals. Just to recall one key feature that took them a long
time to implement: making Firefox a multi-process browser like Chrome.

I mean, you would be ditching all this effort, rewriting it in Rust, spending
key resources only to reach parity with the same browser you already had in
C++, spending years and losing more market share while Chrome spends that time
on optimization and features.

For the record, I think Rust is what will save Mozilla, but it must reinvent
itself: do the best it can with the Firefox codebase it has, and use Rust for
new projects.

Like creating cloud and backend system software, reinventing itself the way
Ubuntu is doing right now.

They have to be very strategic and pragmatic right now. Two big moonshots,
Firefox OS and Rust, and only one of them has paid off the time and resources
invested.

If they want to bet on more moonshots, great, but they must do it in "blue
ocean" places, being more innovative about where they use Rust, where Rust can
shine.

I just think that from a strategic (and even technical) point of view, they
are spending precious resources while further eroding their browser market
share.

~~~
giancarlostoro
I see; I think we agree with each other, we are just looking at things from
different angles. Thanks for the clarification. I do want Mozilla to succeed;
we need other similar orgs willing to compete with tech giants like Google.
Maybe Apache, but they're too busy being a giant corporate project dumping
ground, I suppose.

------
ledauphin
I've switched to Firefox 3 times in the last 2 years, and each time I've been
forced back to Chrome by horrendous (2-3x worse) battery life on my MacBook,
caused by some variant of this bug, which Mozilla seems determined not to fix.
[https://bugzilla.mozilla.org/show_bug.cgi?id=1404042](https://bugzilla.mozilla.org/show_bug.cgi?id=1404042)

~~~
opencl
What makes you think they're determined not to fix it? A commit that
significantly improves the situation landed in nightly a week ago.

[https://bugzilla.mozilla.org/show_bug.cgi?id=1429522](https://bugzilla.mozilla.org/show_bug.cgi?id=1429522)

~~~
ledauphin
The fact that they've not fixed it for literally years, despite hundreds or
thousands of reports?

I'll believe they have a fix when I see it with my own eyes, which probably
won't be for another 6 months because switching my browser workflow isn't
something I want to do every couple of weeks to try out a new nightly with big
promises.

------
tom_mellior
The idea of generating an interpreter from the compiler is really neat.

What I missed in this article is _why_ the Baseline Interpreter is faster than
the C++ interpreter. The code snippet for the load zero instruction looks like
what a compiler should produce for a straightforward C++ switch case for that
instruction. _Except_ that the code uses a push instruction to store the value
directly on the system stack, whereas the C++ interpreter would presumably use
a more general store instruction into an array (in the heap, maybe) treated as
the interpreter stack.

Is that the difference, or am I missing something else?

~~~
jandem
> Is that the difference, or am I missing something else?

That's part of it. The generated interpreter should be a bit faster for simple
instructions because of the reason you give (also: things like debugger
breakpoints have more overhead in the C++ Interpreter).

However, the bigger speedups are because the generated interpreter can use
Inline Caches like the Baseline JIT. The C++ Interpreter does not have ICs.
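A toy model of what an inline cache buys (this illustrates the concept only, not SpiderMonkey's implementation; a real engine caches a hidden "shape" and a field offset, which the object's constructor stands in for here):

```javascript
// A monomorphic inline cache for one property-access site: the first
// lookup records the object's constructor (a stand-in for an
// engine-internal shape), and later calls with the same shape take the
// fast path instead of the generic lookup.
function makePropertyIC(name) {
  let cachedCtor = null;
  return function get(obj) {
    if (obj.constructor === cachedCtor) {
      return obj[name];             // cache hit: imagine an inlined offset load
    }
    cachedCtor = obj.constructor;   // cache miss: re-specialize the site
    return obj[name];               // generic lookup path
  };
}

class Point {
  constructor(x) { this.x = x; }
}

const getX = makePropertyIC('x');
getX(new Point(1)); // miss: fills the cache with Point's shape
getX(new Point(2)); // hit: fast path
```

The point of the article is that the Baseline Interpreter attaches exactly this kind of per-site cache to interpreted bytecode, which the C++ interpreter never did.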

~~~
tom_mellior
Ah, yes, inline caching probably explains it. Thanks!

------
needle0
I'm using Firefox for pretty much everything including Google apps, but the
one thing I do wish Firefox had is support for casting via Chromecast. In my
experience, Chromecast support has been sorta spotty even on Chromium-based
browsers like Vivaldi or Brave, forcing me to keep Chrome proper installed on
my PC for when I want to cast a YouTube video onto a bigger screen. Is this
bit of functionality too proprietary or entrenched enough that it cannot be
ported to any non-Chromium browser?

~~~
alex7o
You can use VLC to Chromecast; it is a little fiddly, but it works most of
the time.

------
the8472
That sounds like the opposite of what graal+truffle is doing. With truffle you
write an interpreter and graal specializes it into a JIT.

------
The_rationalist
Would be nice to see benchmarks comparing V8 vs the new SpiderMonkey!

~~~
nerdponx
Equally interesting would be a side-by-side architecture comparison.

------
fucking_tragedy
Awesome. I love the direction Firefox is heading now that Chrome is a pain to
use.

~~~
faitswulff
I was perfectly fine with Chrome's usability until I started getting a `Hold
Command + Q to quit` prompt on my Mac. Before then, I hadn't even considered
the possibility that an application could block me from quickly and easily
quitting out of it. Now I have to hold the key combo or double tap it to quit
Chrome, and it is the _only_ application I have to do that for. It's so
annoying.

~~~
mikepurvis
I'm surprised that functionality isn't possible to disable, but tbh I really
like it. Too many times a finger slip turns Cmd+W into Cmd+Q and suddenly I've
lost a whole pile of tab state, possibly even half-submitted forms. Yuck.

~~~
OskarS
Surely that can be restored? In Firefox, if you accidentally close a window,
there’s a “Recently closed windows” entry in the history menu that brings back
all the tabs.

~~~
mikepurvis
It can, and things are better than ever as far as preserving (most) form
state, sessions, and scroll positions, but it's not perfect. Just as one
example, pages opened on one network (eg at work) won't be able to reload
elsewhere unless I take the extra step of connecting to the VPN. This is not
the end of the world, but it's an annoying detour when I really just wanted to
close that one tab.

And it's so rare that I shut down the whole browser anyway, so it makes sense
to me to make it hard to do accidentally.

------
gok
Javascript engines gave up on interpreters too quickly. JITs have been a huge
source of security holes, and the language is so huge now that verifying the
correctness of JS optimizations is extremely hard. JS was never meant to be a
high performance language. Plus all the heroic work on exotic optimization has
just resulted in induced demand. Web pages have just grown to contain so much
Javascript that they're even slower than they were when JS was slow.

Browser vendors should agree to make JS slow and safe again like it used to
be, forcing web developers to make their pages smaller and better for users.
For the unusual cases like browser-based games, WebAssembly is ok (it's much
easier to verify the correctness of a WASM compiler), and it should be behind
a dialog box that says something like "This web page would like to use extra
battery power to play a game, is that ok?"

~~~
Quekid5
> JITs have been a huge source of security holes, and the language is so huge
> now that verifying the correctness of JS optimizations is extremely hard.

Do you have numbers to back that up?

There certainly have been security holes in JITs, but AFAICT most of the
browser vulnerabilities have more to do with bad (new) APIs.

The level where a JIT operates really has nothing to do with the surface
syntax of JS, so adding "syntactic sugar" features to JS should have very
little impact on JITs. (I'm thinking of things like the class syntax, lexical
scope for function literals, etc. Maybe there's a class of additions that I'm
missing.)

~~~
gok
> Do you have numbers to back that up?

Hm hard to come up with a number that shows JS optimizations are hard, but you
can peruse a collection of Javascript engine CVEs:
[https://github.com/tunz/js-vuln-db](https://github.com/tunz/js-vuln-db)

Notice how many are JIT or optimization issues, or are in esoteric features
like async generators or the spread operator.

~~~
Quekid5
That's fair and I now know more. I'm not convinced that this is the biggest
issue with JS engines/browsers, but I certainly have more evidence against me
:).

It's interesting how many of those are labeled OOB. Does that mean we're
talking about JIT flaws that allow OOB access to memory? Is it tricking the
JIT itself into allowing OOB access, or is it actually OOB'ing the JIT?

I wonder what the performance impact of _all_ JIT code being forced to do
bounds-checking would be...

~~~
saagarjha
> Is it tricking the JIT itself into allowing OOB access, or is it actually
> OOB'ing the JIT?

What's the difference between the two? Many JavaScript exploits abuse the
interaction between strange features of the language to get around bounds
checks (often, because a length was checked but invalidated by later
JavaScript executing in an unexpected way, or a bound not foreseen as needing a
check) leading to an out-of-bounds. And I'm assuming many of these are heap
corruptions where someone messes with a length that lets them get out-of-
bounds.
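A safe illustration of the "length checked but invalidated by later JavaScript" pattern (this demonstrates the hazard class, not an exploit; in a correct engine the access simply returns `undefined`):

```javascript
// Property-key coercion runs user code: toString fires during the
// access, after any length check a buggy JIT might have hoisted above
// the side effect. A JIT that still assumed length === 4 here could
// read out of bounds; a correct engine just sees the shrunken array.
const arr = [1, 2, 3, 4];

const trickyIndex = {
  toString() {
    arr.length = 1;   // side effect: shrinks the array mid-access
    return '3';
  },
};

const result = arr[trickyIndex]; // key coercion shrinks arr first
```

Many of the CVEs in that collection follow this outline: some innocuous-looking hook (`toString`, `valueOf`, a Proxy trap, a getter) runs between a check and its use.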

~~~
Quekid5
It's not much of a difference in terms of consequences, but always-on bounds
checking could alleviate OOB'ing the JIT at least.

------
gwern
So would this be one of the Futamura projections?
[http://blog.sigfpe.com/2009/05/three-projections-of-doctor-futamura.html](http://blog.sigfpe.com/2009/05/three-projections-of-doctor-futamura.html)

------
sehugg
What's a good way to diagnose optimization/deoptimization performance issues?
The Z80 emulator I use for
[http://8bitworkshop.com/](http://8bitworkshop.com/) has some long pauses
while it's spinning up. It uses a huge generated switch statement, which I'd
assume is hard to optimize if type info isn't complete. (I'm replacing it with
a simpler emulator which works much better though)

~~~
tasty_freeze
I have written a single javascript program in my life, and it was an emulator
for an 8080-based machine. I used
[https://bluishcoder.co.nz/js8080/](https://bluishcoder.co.nz/js8080/) for
that part of the emulator, though I had to make some changes to it.

I found the emulator ran 4x faster on Firefox than on Chrome. The culprit was
the main dispatch loop, a 256-entry switch statement. Chrome used a slow
fallback path because there were too many cases. The fix was to have "if
(opcode < 128) { switch for first 128 cases } else { switch for other 128
cases }". It made FF a little bit slower, but greatly sped things up on
Chrome.

I also tried generating 256 functions and then dispatch to the right sub based
on an array of function pointers, but it wasn't any faster than the switch
statement.
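The split-switch workaround can be sketched like this (only two opcodes are filled in per half, and the `state` shape is a hypothetical stand-in for a real 8080 emulator's registers):

```javascript
// Splitting a 256-case opcode dispatch into two 128-case switches keeps
// each switch under the case-count heuristics that pushed Chrome onto a
// slow fallback path, while preserving identical dispatch semantics.
function step(state, opcode) {
  if (opcode < 128) {
    switch (opcode) {
      case 0x00:                                  // NOP
        break;
      case 0x04:                                  // INR B
        state.b = (state.b + 1) & 0xff;
        break;
      // ... remaining cases 0x00-0x7f
      default:
        throw new Error('unimplemented opcode ' + opcode);
    }
  } else {
    switch (opcode) {
      case 0x80:                                  // ADD B
        state.a = (state.a + state.b) & 0xff;
        break;
      // ... remaining cases 0x80-0xff
      default:
        throw new Error('unimplemented opcode ' + opcode);
    }
  }
  return state;
}
```

Whether the split still helps today is engine- and version-dependent, as the parent comment notes.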

But that was five years ago, and I'm sure the landscape is different now.

------
opan
I'm seeing a lot of people saying Gmail and YouTube are slow in Firefox. This
may not sound like a good answer, but consider using a dedicated video player
such as mpv for YouTube, and an email client instead of webmail.

Web browsers are some of the worst-performing software we have today. Asking
them to do more than display documents and web pages never seems to go well.

------
Noumenon72
I'm sorry, I do not understand from the article what the Baseline Interpreter
is or does. It keeps the Baseline Compiler from having to compile so many
functions by turning some sections into bytecode first?

~~~
lovasoa
Everything is turned into bytecode anyway. The Baseline Interpreter interprets
the bytecode faster than the C++ interpreter, which allows them to send less
code to the JIT compiler.
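The tiering idea can be sketched as a hotness-counter state machine (the thresholds, tier names, and `currentTier` hook below are invented for illustration; real engines tune these heuristics carefully and expose nothing like this):

```javascript
// Code starts in the cheapest tier and is promoted once a call counter
// suggests compilation will pay for itself. Semantics never change
// between tiers; only how the same bytecode gets executed.
const WARMUP_THRESHOLD = 10;    // hypothetical values
const HOT_THRESHOLD = 100;

function makeTieredFunction(fn) {
  let tier = 'interpreter';
  let counter = 0;
  function run(...args) {
    counter += 1;
    if (tier === 'interpreter' && counter >= WARMUP_THRESHOLD) {
      tier = 'baseline';        // imagine: switch to Baseline JIT code
    } else if (tier === 'baseline' && counter >= HOT_THRESHOLD) {
      tier = 'optimizing';      // imagine: recompile with IonMonkey
    }
    return fn(...args);         // same result in every tier
  }
  run.currentTier = () => tier; // exposed only so the sketch is observable
  return run;
}
```

The Baseline Interpreter slots in as an extra cheap tier before the Baseline JIT, so fewer functions ever need compiling at all.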

------
FreeHugs
I wouldn't be surprised if improvements in Javascript execution make
webassembly obsolete.

It already has only a slim advantage of being ~2x faster.

And Javascript has so many advantages in terms of handling. No compilation
step needed at all. And since it has modules widely supported now, it is a joy
to code in native Javascript without any libraries like React&Co.

Just look at how beautifully you can dynamically load code when it is needed
in modern Javascript:

    let calendar = await import('/modules/calendar.js');
    calendar.askUserForDay("Checkin Date");

~~~
rocqua
You are possibly right, but it would be sad if our choices for writing
programs with a UI were: write in JavaScript or compile to JavaScript. There
are many languages out there, and webASM would allow them to work without the
massive pain of cross-compilation.

I guess I am just an idealist screaming about how packet switched networks are
unreliable and we should all use circuit switched networks.

~~~
FreeHugs
If you want to use a different language, why would you care about the compile
target?

Compiling is done by the compiler. So to the developer it is the same, no
matter whether it compiles to Javascript or Webassembly.

In the end, I don't think writing code for the web in languages other than
Javascript will take off, simply because Javascript will always evolve to fit
this specific environment and will therefore always be the best choice, while
other languages evolve to best fit their own niches.

~~~
rocqua
Cross-compilation always comes at a performance cost. Moreover, it is another
compilation target your compiler needs to support, and when that compilation
target is a high-level language, supporting it is harder. This means
JavaScript as a compilation target is less likely to be added.

------
joaobeno
I love those elemental animal names of their projects...

------
dandigangi
Had no idea their interpreter was C++ under the hood. Excellent article from
the Mozilla team, as always.

------
devwastaken
I can't help but think that regardless of what different browser vendors do,
there's no competition with V8. Like any language there is a standard library,
or environment, around it. V8 is to JavaScript what CPython is to Python.

At least in Python you can use other versions and it's a very similar
environment. But if you want to use Mozilla's SpiderMonkey without Firefox,
the experience is leaps and bounds worse. I'd argue that V8 is far less of a
lock-in by comparison.

Given all that, why are we still creating entirely separate engines that are
built differently, yet do the same thing?

~~~
the_duke
Diversity and multiple implementations are essential for the web IMO.

V8/Blink/Chromium are not independent community projects, but firmly in the
hands of Google. Chromium being the only viable implementation would put too
much control in the hands of a single company (regardless of which company
that is, Firefox being the only implementation would be just as bad).

It's redundant effort, but it also enforces consensus building and exchange of
ideas.

E.g. Google would have happily stuck with PNaCl, but (AFAIK) Mozilla pretty
much forced their hand - with the result being WebAssembly, a much better
design.

~~~
why_only_15
Also a much slower one - on PDFTron's benchmark, WASM is half as fast as PNaCl
for me: [https://www.pdftron.com/benchmarks/pnacl-vs-wasm/](https://www.pdftron.com/benchmarks/pnacl-vs-wasm/).

~~~
the_duke
Their very own blog post goes into plenty of detail about this:
[https://www.pdftron.com/blog/wasm/wasm-vs-pnacl/](https://www.pdftron.com/blog/wasm/wasm-vs-pnacl/)

It's nothing fundamental about WASM; in fact they state that the actual
computations are slightly faster.

