
JIT-Less V8 - bpasero
https://v8.dev/blog/jitless
======
cromwellian
For very security-sensitive embedded applications, this could be a huge boon,
since it reduces the attack surface, both by eliminating executable pages and
through the relative simplicity of an interpreter versus a full JIT. Granted,
there are many JS interpreters already available, like Duktape, that deliver
the same benefits, but the immediate upside of this is compatibility with the
full Node/ES6+ ecosystem and Chrome Dev Tools.

I have to say, Duktape looks like it might have superior resource usage for
very low-memory situations.

~~~
xuejie
Definitely agree Duktape is a decent attempt at the low-memory situation, but
one additional point is that Duktape is very slow compared to modern
JavaScript engines.

Hence I wonder if we could split Ignition off from V8 to create a standalone
fast JavaScript interpreter, at the cost of (possibly) higher memory
consumption than Duktape; that could prove useful in many scenarios.

~~~
Leszek
Ignition is _very_ tightly coupled to the rest of V8, starting with the fact
that it uses inline caches and the object model to maintain performance, and
finishing with it itself being written in "CSA", which is an assembler DSL
that is passed through the TurboFan (optimizing compiler) backend to generate
the machine code for the bytecode handlers (this has the interesting side-
effect that porting V8 to a new platform requires porting the optimizing
compiler). There's not really much that can be split off.

~~~
xuejie
Thanks for the explanation! One follow-up question: how can we still call
Ignition an interpreter when TurboFan is used to generate its machine code?
Doesn't that defeat the purpose of having an interpreter in V8, namely to run
on platforms without write access to executable memory, such as iOS or the
PS4?

~~~
schuay
While bytecode handlers (and other builtins) _are_ generated by TurboFan, this
happens at V8-compile-time, not at runtime. The generated code ships inside
the binary as embedded builtins.

~~~
c256
This suggests that a specialized app (such as a set-top box, smart TV, or game
console) could push more code through the pre-JIT process to further close the
performance gap. (This is interesting to me, because I haven’t seen much
interest in pre-JIT compilation since the early days of Java, HotSpot, etc.)

------
brlewis
The information I was looking for, what kind of interpreter they mean, was in
an Ignition article linked from the main article:
[https://v8.dev/blog/ignition-interpreter](https://v8.dev/blog/ignition-
interpreter)

"With Ignition, V8 compiles JavaScript functions to a concise bytecode, which
is between 50% to 25% the size of the equivalent baseline machine code. This
bytecode is then executed by a high-performance interpreter which yields
execution speeds on real-world websites close to those of code generated by
V8’s existing baseline compiler."

~~~
Leszek
Note that the "existing" baseline compiler is now removed, and entirely
replaced by the interpreter.

------
irrational
Apple requires browsers on iOS to use the same rendering engine as Safari, but
would it allow a browser to use a different JavaScript engine? Or is this a
loophole that Apple didn't foresee and will be closing in the future? Are
the rendering engine and JavaScript engine so separated that you could use the
Safari rendering engine, but a different JS engine?

~~~
est31
There's a software-level rule that forbids JIT engines: apps are never allowed
to mark pages as executable. Under that rule alone, you could now run Chrome
on iOS. But there's also a policy-level rule:

> 4.7 HTML5 Games, Bots, etc.

> Apps may contain or run code that is not embedded in the binary (e.g.
> HTML5-based games, bots, etc.), as long as [...] the software [...] only
> uses capabilities available in a standard WebKit view (e.g. it must open and
> run natively in Safari without modifications or additional software); your
> app must use WebKit and JavaScript Core to run third party software and
> should not attempt to extend or expose native platform APIs to third party
> software

[https://developer.apple.com/app-
store/review/guidelines/](https://developer.apple.com/app-
store/review/guidelines/)

~~~
0815test
You could use the Xcode linker to embed your JS/WASM code in the binary as a
read-only resource/section, and then run it using JIT-less V8. Interpreted
code is generally more compact than a native ISA like ARM64, so this could be
useful in order to reduce app size.

~~~
saagarjha
> You could use the Xcode linker to embed your JS/WASM code in the binary as a
> read-only resource/section, and then run it using JIT-less V8.

Or just read it from a file?

~~~
0815test
It needs to be embedded in the binary, because code that is _not_ so embedded
has different rules applied to it. So, it has to be done either in the
compiler (e.g. as compile-time const arrays, which will then be part of the
rodata section) or in the linker; the latter is arguably a bit easier.
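The compile-time const-array variant might look like this minimal C sketch (in
practice the array would be generated from the script file, e.g. with `xxd -i`
or a linker section; the function name and the script contents here are just
placeholders for illustration):

```c
#include <stddef.h>

/* The script ships inside the binary's read-only data section; at
 * runtime it is handed to the (jitless) engine as an ordinary string
 * rather than loaded from a file, so the "embedded in the binary"
 * rule is satisfied. */
static const char kEmbeddedScript[] =
    "function add(a, b) { return a + b; }\n"
    "add(2, 40);\n";

const char *embedded_script(size_t *len) {
    *len = sizeof kEmbeddedScript - 1;  /* exclude the trailing NUL */
    return kEmbeddedScript;
}
```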

~~~
xsmasher
"The binary" in this context means the ipa (archive) that you send to Apple.

They are making a distinction between downloaded data and data included in the
archive, but using sloppy terminology.

------
hajile
What about Proper Tail Calls?

Duktape and XS support them. JSC has had them for years now too. That's a big
feature to accidentally miss. You switch and then inadvertently blow your
stack every now and then, because V8 decided to remove their already-
implemented tail calls for no good reason. (Lest we go down that road again:
the "alternative syntax" proposal was dropped, so there's zero excuse aside
from a deliberate violation of the spec.)

------
samcday
For a team that has been pushing the cutting edge of Javascript VM performance
for many years, it must feel pretty weird to ship a feature that allows one to
willingly regress performance so much!

~~~
schuay
A fast V8 with jitting isn't going away. Jitless V8 is meant for embedders
that either cannot or do not want to allocate executable memory at runtime.

(Also, in many common real-world workloads the performance regression is
minimal.)

~~~
samcday
I was worried my original comment might be misunderstood, that's why I
qualified it as "... allows one to >> _willingly_ << regress performance...".
Looks like it still got misunderstood anyway ;)

------
bastawhiz
I'm curious if the runtime flag to enable JITless mode could also be enabled
at compile time, removing the JIT compiler from the binary entirely. That
could be really useful for projects where memory comes at a premium (and
performance is not a major concern), like micropython but for JavaScript.

I assume this also doesn't support WASM when the JIT is disabled (or rather,
when you can't write to executable memory), but if it did it could be a neat
way to write decently performant software for tiny systems with just some
JavaScript "glue".

~~~
schuay
> I'm curious if the runtime flag to enable JITless mode could also be enabled
> at compile time, removing the JIT compiler from the binary entirely. That
> could be really useful for projects where memory comes at a premium (and
> performance is not a major concern), like micropython but for JavaScript.

Theoretically yes, but this is not implemented. It should not be too hard to
drastically reduce binary size with a build-time flag.

> I assume this also doesn't support WASM when the JIT is disabled (or rather,
> when you can't write to executable memory), but if it did it could be a neat
> way to write decently performant software for tiny systems with just some
> JavaScript "glue".

Correct, wasm is currently unsupported. Interpreted wasm is possible in the
future, but would likely be very slow.

~~~
bodyfour
> It should not be too hard to drastically reduce binary size with a build-
> time flag.

And then how portable would the code be? Would this be a path to running node
on CPUs without JIT support? Or does it still have to mess with the calling
convention at an assembly level?

~~~
amaranth
According to another comment in this thread [1] the interpreter is actually
generated by the JIT at compile time so no, this wouldn't let you run V8 on a
CPU that isn't currently supported.

[1]
[https://news.ycombinator.com/item?id=19379305](https://news.ycombinator.com/item?id=19379305)

------
gok
JIT-less should really be the default on the web. The security implications of
RWX memory are just so bad, and the amount of time that an exotic JIT
meaningfully improves behavior of real world web browsing (as opposed to
JavaScript benchmarks) is limited. For the rare web app where a JIT is
critical, a simple "Do you really trust this web page to perform a lot of
computation?" dialog would mitigate a lot of zero-click/one-click attacks.

~~~
hashseed
V8 already employs W^X, i.e. memory pages allocated for V8's heap are either
writable or executable, but not both at the same time.

~~~
MikeHolman
By allowing JIT at all, a small ROP chain can call VirtualProtect to make a
larger payload executable.

Sure you can do everything with ROP, but it is less convenient (and Intel CET
might eventually make ROP attacks actually hard).

------
danbolt
As mentioned in the article, this is interesting for game developers that
aren't allowed to run unsigned code (eg: JIT).

JavaScript is very popular in the programming zeitgeist and is likely to be
the language non-programmers are exposed to via the web. Part of me wonders if
game engines might take to integrating it instead of Lua, since designers
could be more familiar with it.

------
writepub
THIS is another reason to complain to EU regulators [1] regarding Apple's
unfair trade practices. Never before in the history of computing has a
company so blatantly suppressed the competition and gotten away with murder.
V8 should not be the one needing re-architecture to meet anti-competitive iOS
App Store rules; the rules need to make common sense and treat the
competition fairly.

[1]: [https://techcrunch.com/2019/03/13/spotify-files-a-
complaint-...](https://techcrunch.com/2019/03/13/spotify-files-a-complaint-
against-apple-with-the-european-commission-over-apple-tax-and-restrictive-
rules)

~~~
roryokane
Wrong thread – this should have been posted on
[https://news.ycombinator.com/item?id=19377322](https://news.ycombinator.com/item?id=19377322).

------
piannucci
How bad of an idea would it be to create a ROP-based JIT engine for these
platforms? You could hand-craft the gadgets and use the stack to reduce
interpreter dispatch overhead.

~~~
Scaevolus
That's called "Subroutine threading"! :-)
[https://en.wikipedia.org/wiki/Threaded_code#Subroutine_threa...](https://en.wikipedia.org/wiki/Threaded_code#Subroutine_threading)

V8's Ignition interpreter is implemented with "Direct threading", which is
quite similar but (probably?) faster on modern processors -- it does an
indirect jump to the next bytecode handler instead of a return:
[https://news.ycombinator.com/item?id=10034167](https://news.ycombinator.com/item?id=10034167)

"The bytecode handlers are not intended to be called directly, instead each
bytecode handler dispatches to the next bytecode. Bytecode dispatch is
implemented as a tail call operation in TurboFan. The interpreter loads the
next bytecode, indexes into the dispatch table to get the code object of the
target bytecode handler, and then tail calls the code object to dispatch to
the next bytecode handler."
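For illustration, that dispatch-to-next-handler scheme can be sketched as a
toy direct-threaded interpreter in C (using the GCC/Clang computed-goto
extension; the three-opcode instruction set is invented for the example and
has nothing to do with V8's actual bytecode or its CSA-generated handlers):

```c
#include <stdint.h>

/* Toy direct-threaded interpreter: each handler ends by jumping
 * straight to the next bytecode's handler -- there is no central
 * dispatch loop and no return between handlers. */
enum { OP_PUSH, OP_ADD, OP_HALT };

int64_t run(const uint8_t *code) {
    /* Dispatch table: one label address per opcode (GCC/Clang
     * "labels as values" extension). */
    static const void *handlers[] = { &&op_push, &&op_add, &&op_halt };
    int64_t stack[32];
    int sp = -1;
    const uint8_t *pc = code;

    #define DISPATCH() goto *handlers[*pc++]
    DISPATCH();

op_push:                      /* OP_PUSH imm8: push the next byte */
    stack[++sp] = *pc++;
    DISPATCH();
op_add:                       /* OP_ADD: pop two values, push sum */
    sp--;
    stack[sp] = stack[sp] + stack[sp + 1];
    DISPATCH();
op_halt:                      /* OP_HALT: return top of stack */
    return stack[sp];
    #undef DISPATCH
}
```

Subroutine threading would instead emit a `call` per bytecode and let `ret`
drive dispatch; the indirect `goto` above avoids the return entirely.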

------
ape4
"V8 is Google’s open source high-performance JavaScript and WebAssembly
engine, written in C++." from its home page. I didn't immediately know what it
was.
~~~
michaelcampbell
And?

~~~
CGamesPlay
A common (and, in my opinion, legitimate) complaint about blog posts that
appear here is that the company publishing the post doesn't say what it is
they are or do, and as a result, why anyone should care about the blog post.

------
tuxxy
Wow, this is pretty neat.

We might finally be able to run safer cryptography in the browser with
constant-time guarantees (there are other concerns with browser-based crypto,
though).

------
la_fayette
Does this mean React Native can use V8 on iOS? Does this have any consequences
for performance or similar?

~~~
sercand
I think JavaScriptCore is more performant than V8, so it is not necessary.

~~~
schuay
That is not yet clear. Early adopters have reported (jitless) V8 to be at
least as fast as (jitless) JSC on Octane2 on a native iOS device.

------
childintime
I'd like to try this on the desktop. The difference in memory usage is
probably an order of magnitude. That could make my 2GB laptop usable again. I
doubt I'll see the difference on any site I care about (no facebook for
example). I remember when Java could make the browser totally unusable for
several minutes. An interpreter would have avoided that.

~~~
mbel
From the article:

> Memory consumption only changed slightly, with a median of 1.7% decrease of
> V8’s heap size for loading a representative set of websites.

What makes you believe it should be anything significant? After all, the
JIT-compiled code cannot be that large.

~~~
olliej
JIT code for JS is /huge/ at the lower optimization levels, dramatically
larger than an interpreter's bytecode, on the order of 10x: many megs of code
are generated by relatively small amounts of JS.

------
andy_ppp
Does this avoid some of the issues with Spectre et al?

~~~
olliej
nope - unrelated security concerns.

------
mark-r
Just another indication of Google trying to take over the world. There's a
class of machines where Chrome can't run? We need to fix that, stat!

~~~
mark-r
The tone may be flippant, but I was serious. Google's M.O. is to expand their
reach into as many corners of our lives as possible. Having devices that can't
run Javascript on Chrome is an impediment to that goal, and so I'm sure the
marching orders were to find a way to make it work. It is already acknowledged
that some upcoming work will be done to improve areas that are still too slow.

------
bjourne
I'm skeptical of the claim of improved security. Theoretically, if there were
some horrible bug in the JIT, one could craft malicious input data causing
the JIT to insert arbitrary code in the code heap. In practice, it doesn't
seem possible. At least HotSpot has been JITting code for decades and no one
has been able to find such an exploit.

~~~
saagarjha
> Theoretically, if there were some horrible bugs in the JIT, one could craft
> malicious input data causing the JIT to insert arbitrary code in the code
> heap. In practice, it doesn't seem possible.

This is very possible: just do a search for ${your favorite JIT} arbitrary
code execution, and you'll almost certainly see a real-world vulnerability.

> At least HotSpot has been JITting code for decades and no one has been able
> to find such an exploit.

Yeah, no. See for example
[https://www.syscan360.org/slides/2013_EN_ExploitYourJavaNati...](https://www.syscan360.org/slides/2013_EN_ExploitYourJavaNativeVulnerabilitiesOnWin7JRE7InOneMinute_YukiChen.pdf)

~~~
bjourne
But the feature the exploit takes advantage of isn't _just in time_
compilation, it is compilation! An _ahead of time_ java compiler would have
suffered from the exact same problem. In fact, any language compiling to
machine code would be just as vulnerable.

~~~
pitaj
Yes, but when pre-compiling, you implicitly trust the code. JITs like V8 are
used to execute arbitrary code on your device, where such an exploit is much
more harmful.

~~~
bjourne
Untrue. Dart code, for example, is AOT-compiled but untrusted. Various
JavaScript implementations are also AOT-compiled but likewise meant to run
untrusted code.

~~~
pitaj
I have no idea what you're talking about. Under what circumstances is Dart AOT
compiled and run untrusted? No browsers support Dart as a first-class citizen.
If you're talking about compiling Dart into JS, that's obviously not what
anyone is talking about.

There are no ECMAScript AOT compilers. By definition, ECMAScript must be run
with an interpreter. AFAIK, given the dynamic complexity of the language, it's
impossible to AOT-compile even without things like `eval` and `new Function`.

A better example would be NaCl, which as I understand it runs native machine
code in a sandbox.

~~~
bjourne
What I'm talking about? "Yes, but when pre-compiling, you implicitly trust the
code." Citation needed. It's not true at all.

~~~
geofft
Can you give one example of a case where code is compiled AOT and not trusted
during compile time? (Your example of Dart was challenged, and I agree that it
is not an example, so a more detailed explanation of why it is an example
would count.)

~~~
bjourne
Why on earth wouldn't Dart count? It is AOT-compiled and meant to be run
_untrusted_ inside a Dart VM inside a web browser. That was the intention of
the project even if it was cancelled and the VM deprecated. For more examples,
see ActionScript on iOS, TFA itself, or any of the myriad projects trying to
AOT-compile JavaScript. For example
[https://link.springer.com/article/10.1134/S036176881701008X](https://link.springer.com/article/10.1134/S036176881701008X)

------
mises
I keep wondering: what is the deal with JS on the backend? Why does everyone
love it so much? Let's not forget the 10-day design, the dynamic (and, compared
with Python, weak) typing, the slowness, null vs. undefined, etc. JS is a
scripting language; scripting languages are supposed to control the behavior
of applications (i.e. browsers), not be applications in and of themselves. Not
to mention the dependency hell, NPM insecurity, etc. I see the purpose for
limited use in websites, but definitely not the PWA or backend stuff.

Can someone who uses it happily on the backend talk about why they like it and
why it's good?

~~~
efdee
It sounds like you read a lot of articles about why JS is bad, but don't have
a lot of experience with it.

* 10-day design? Nobody is using Javascript 1.0 anymore.

* Typing is available to various extents thanks to Flow and/or Typescript.

* Slowness... I don't know what you're referring to. Javascript isn't slow.

* null vs undefined. What about them? They are two different things with different meanings.

* Dependency hell. I assume you refer to the many small modules on NPM with dependencies on other modules. Not sure what the problem here is per se. Avoid them if you don't like dependencies.

* NPM insecurity - what?

I like JS on the backend because it's a nice, flexible language to work with,
with a healthy and cheap (cost-wise) ecosystem. I get a lot of stuff done very
quickly, and I can run my stuff pretty much everywhere.

~~~
whatshisface
> _NPM insecurity_

NPM packages can contain malicious code. There's no NPM review process, and
you can't point to specific versions to lock in your own reviews (package
administrators can change whatever files they'd like). There's no such thing
as a verified-safe dependencies list because the file you reviewed last month
might not be downloaded today.

~~~
efdee
You definitely can point to specific versions.

That being said, packages on most package managers can contain malicious code,
and very few registries actually review their packages.

Besides, any company using NPM seriously probably has its own proxy in front
of it, so there's no case of "the file might not be downloadable anymore",
even if that were a problem with NPM itself.

