
A lighter V8 - tosh
https://v8.dev/blog/v8-lite
======
Nitramp
One of the interesting aspects of benchmarks is that they are usually designed
and intended to run in isolation. E.g. if you benchmark a database system, you
expect it to be the sole system running on your server machine, in control
of all resources.

That's not true of software running on desktop systems or mobile phones -
desktops usually run many concurrent tasks, as do phones to some degree, and
then there's also the question of battery use.

That can create skewed incentives if the benchmark isn't carefully designed.
E.g. you can usually make space/time tradeoffs regarding performance, so if
your benchmark is solely measuring CPU time, it pays off to gobble up all
possible RAM for even minor benefits. If your benchmark is only measuring
wallclock time, it pays off to gobble up all the CPUs, even if the actual
speedup from that is minor.

This can lead to software "winning" the benchmark with improvements that are
actually detrimental to performance on end users' systems.
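The space/time tradeoff described above can be sketched with a toy example (names and numbers here are illustrative, not from any real benchmark): an unbounded memoization cache looks like a pure win to a CPU-only benchmark, while its RAM cost is invisible to it.

```javascript
// Toy space/time tradeoff: memoization trades RAM for CPU time.
// A CPU-only benchmark rewards the cache; a memory-aware one would not.
const cache = new Map();

function slowSquare(n) {
  // simulate expensive work: n repeated additions
  let r = 0;
  for (let i = 0; i < n; i++) r += n;
  return r;
}

function cachedSquare(n) {
  // unbounded cache: every distinct input stays in memory forever
  if (!cache.has(n)) cache.set(n, slowSquare(n));
  return cache.get(n);
}

console.log(cachedSquare(4)); // computed once, then served from RAM
```

Run against a fixed set of inputs, the cached version wins on CPU time every round while its memory footprint only grows.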

~~~
robocat
Aside: Chromium includes regression tests for battery usage.

I saw a fix get reverted because the code change caused a 5% step increase in
battery usage.

They have some very good testing infrastructure.

~~~
crispyporkbites
How do they test this?

~~~
judge2020
this commit[0] might be what's referenced, and this[1] looks to be the test
that Google runs (not sure if this is run in CI, or only run manually when
performance tests[2] show issues).

0:
[https://github.com/chromium/chromium/commit/208f274e4bcb5174...](https://github.com/chromium/chromium/commit/208f274e4bcb51748bba83071aa70011ceafa385)

1:
[https://chromium.googlesource.com/chromiumos/third_party/aut...](https://chromium.googlesource.com/chromiumos/third_party/autotest/+/refs/heads/master/client/site_tests/power_LoadTest/README.md)

2: [https://chromeperf.appspot.com/](https://chromeperf.appspot.com/)

~~~
saghul
Never heard of [1] before. Impressive stuff.

------
claytongulick
When I saw the title of this article, I got really excited because I thought
they were referring to a lighter "build".

IMHO, one of the biggest problems facing v8 right now is the build process.
You need to download something like 30 gigs of artifacts, and building on
windows is difficult - to say the least.

It's bad enough that the trusted postgres extension plv8 is considering
changing its name to pljs and switching engines to something like QuickJS.
[0]

One of the driving factors is that building and distributing V8 as a shared
lib as part of a distro is incredibly difficult, and increasing numbers of
distros are dropping it. This has downstream effects for anyone (like plv8)
that links to it.[1]

Also, embedding it is super complex. Referenced in the above conversation is a
discussion of how NodeJS had to create its own build process for V8. At this
point, it's easier to use the NodeJS build process and the Node V8 API than it
is to use V8 directly.

At the beginning of the article, they talk about building a "V8 Lite" for
embedded application purposes, which was pretty exciting to me, but then they
diverge and focus on memory optimizations that are useful for all of V8. This
is great work, no doubt, but since this is the most popular and best tested
JavaScript engine, I'd love to see a focus on ease of building and embedding.

0:
[https://github.com/plv8/plv8/issues/364](https://github.com/plv8/plv8/issues/364)

1:
[https://github.com/plv8/plv8/issues/308#issuecomment-4347400...](https://github.com/plv8/plv8/issues/308#issuecomment-434740000)

~~~
vvanders
Meanwhile integrating Lua is as simple as just dropping in a few c files...

~~~
shakna
As is Duktape, if for some reason you need it to be JavaScript.

~~~
edoceo
I thought the word "duktape" was typo+snark. It's not. 100% legit, thanks!

[https://github.com/svaarala/duktape](https://github.com/svaarala/duktape)

~~~
shakna
Sorry, I probably should have linked to it.

The embedded API is really easy to make use of, and I'd say it's reached
production-level stability now.

I've only had to use it on tiny hardware though, so your experience may
differ.

------
syspec
The most recent version of Chrome has made having a breakpoint a painful
affair.

* The browser becomes completely unresponsive for 3-5 seconds each time you hit a breakpoint.

* It also takes Chrome much longer to show you the source maps for a page.

* If you refresh while on a breakpoint, it will remain unresponsive for nearly 10 seconds.

Is this related to these new changes? Is there a way to revert the trade off?

~~~
JMTQp8lwXL
> * If you refresh while on a breakpoint it will remain unresponsive for
> nearly 10 seconds.

Glad to hear I'm not alone in experiencing this. For a period, I thought I had
written bad code that caused Chrome to stumble. Guess it's just Chrome.

~~~
natorion
Can you please file a bug report on crbug.com?

~~~
mehrdadn
How are Chrome devs themselves not experiencing this? I've experienced it
too...

~~~
djmips
It's easy to conceptualize a configuration or use case that's common to the
people reporting a bug but not used by the developers or in their QA process.
It's not uncommon.

------
ksec
I think Firefox went through something similar many years ago: a project
called MemShrink started after many users complained that Firefox was getting
slower and more bloated, around Firefox 3-4 if I remember correctly.

It was memory optimisation in every part of Firefox, and mostly in
SpiderMonkey.

~~~
clumsysmurf
I think Firefox needs another MemShrink initiative. After Electrolysis / e10s -
at least in my experience - the browser uses more memory over time than
Chrome.

~~~
aitchnyu
I tolerate (apparently) a 6G memory leak with 50+ tabs and Tree Style Tabs. It
was much faster when Mozilla accidentally remotely disabled extensions for a
weekend.

------
truth_seeker
Great engineering stuff. I am consistently amazed by the work of V8 team.

I hope V8 v7.8 makes it to Node v12 before its LTS release this coming October.

------
pier25
Somewhat off topic but this Lite version made me think of something.

Is there any engine/browser mode/something that lets go of legacy JS/CSS/HTML
in exchange for better performance and lower weight/memory consumption?

A browser/JS engine with legacy support removed would in principle be much
faster, no?

~~~
untog
Sciter is maybe close:

[https://sciter.com/](https://sciter.com/)

It allows you to make desktop apps with HTML/CSS, but it has its own custom
engine that doesn't support all the quirks the web does.

~~~
t0astbread
This looks interesting, but the code samples look like they include
non-standard features (if I'm not mistaken).

Is this compatible out of the box with (most) modern web apps? Because that's
the real value of Electron.

~~~
pier25
> _Is this compatible out of the box with (most) modern web apps? Because
> that's the real value of Electron._

No, it's not. It uses its own JS-like scripting language, and custom CSS3
support.

A couple of months ago the author suggested on Reddit he might start working
on an Electron alternative that used Node + Sciter.

[https://www.reddit.com/r/programming/comments/a8vkzm/scitern...](https://www.reddit.com/r/programming/comments/a8vkzm/sciternode_as_an_alternative_to_electron/ece54af/)

------
Sawamara
It is rather interesting to me that V8 has packed in so many micro-
optimizations and special techniques over the years that it has now become
feasible to start cutting back on the optimizations to get performance gains.

Beyond having more sensible defaults, all this enables developers who use V8
with Electron or NW.js to tweak the default behavior of the engine to suit
their application's needs. That is always good.

~~~
cwp
That's not what I took away from this at all. They specifically say that Lite
mode started out as a way to reduce memory consumption at the cost of
performance. Execution time jumped 120%!

Then they figured out how to get the memory improvements without the
performance hit. The only place where they actually removed optimizations was
in generating stack traces, and that wasn't a gain in performance, it was just
considered acceptable for that to get slower.

------
andrewhodel
Someone needs to add some heavy complexity to Acid4 so there's a solid
baseline for tests.

They should have Acid sub-tests for images, videos, etc. so these statistics
would actually provide something long-term and important.

------
umvi
Is there any reason there couldn't be a python interpreter as awesome as V8 is
for JavaScript?

~~~
rudi-c
I did some work on a Python JIT in the past. The two biggest challenges were:

\- Python is much, much more dynamic than JavaScript. You can override just
about anything in Python, including the meaning of accessing a property. You
have overloaded operators (with pretty complex resolution rules), metaclasses,
and more. And they're all used extensively. There are some JavaScript
equivalents to those things, but they either have fewer deoptimization cases
or are features that aren't commonly used in practice (e.g. Proxy objects).

\- Python has a ton of important libraries implemented as C extensions. These
libraries tend to depend on undefined behavior of the CPython interpreter
(e.g. destruction order, which is more deterministic with ref counting) or do
things that happen to work but are clearly not supposed to be done (e.g.
defining a full Python object as a static variable).
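The Proxy objects mentioned above are roughly the JS analogue of Python's "override the meaning of accessing a property". A minimal sketch of why they are a deoptimization hazard: once reads go through a user-defined trap, the engine can no longer rely on its usual fast-path assumptions about property lookup for that object.

```javascript
// A JS analogue of Python-style custom attribute access: the Proxy's
// `get` trap intercepts every property read on the object, so ordinary
// inline-cached fast paths don't apply to it.
const plain = { x: 1 };

const proxied = new Proxy(plain, {
  get(target, prop) {
    // custom lookup rule: missing properties read as 0 instead of undefined
    return prop in target ? target[prop] : 0;
  },
});

console.log(plain.y);   // undefined — ordinary lookup
console.log(proxied.y); // 0 — this read ran the trap function
```

As the comment notes, in JS this stays an opt-in corner of the language, while in Python equivalent dynamism (`__getattr__`, descriptors, metaclasses) is used pervasively.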

I guess there are also economic incentives: nobody has had an incentive to
staff a 50-person project to build a Python JIT, given that it's cheaper to
rewrite some or all of the application in C/C++/Rust/Go, whereas that's not an
option in JavaScript-land.

~~~
ufo
I completely agree with you and I'd argue that #2 and #3 are the two biggest
reasons.

It is easy to forget the colossal amount of engineering resources that browser
vendors have spent creating and rewriting their Javascript engines. And due to
the nature of how their JITs work, all that work is tied down to the specific
Javascript environment they were written for. (For example, you can't really
reuse the v8 codebase to create a Python JIT)

And in Python's case a lot of the appeal of the language rests on the
extensive library ecosystem, which has a significant number of extensions
written in C. Generally speaking JIT compilers aren't very good at optimizing
code that spends a lot of time inside or interacting with C extensions, even
if we ignore the significant issues you mentioned regarding undefined
behavior.

~~~
The_rationalist
Why is JavaScript slower than Java, despite both being garbage collected and
V8 having more engineers than e.g. OpenJDK?

~~~
setr
A couple of obvious reasons would be compiled vs. interpreted, and static vs.
dynamic, both of which bear some inherent runtime performance cost.
~~~
jashmatthews
Compiled vs. interpreted is not a useful distinction. Until fairly recently,
when Ignition was introduced, V8 had no interpreter at all: all JS was
compiled by the first tier, “full codegen”.

The bigger difference is that the JVM is heavily optimized for performance
after a long warmup and V8 needs to produce relatively fast code early during
page loading.

Java being much more static certainly helps warmup time but ultimately doesn’t
really affect final performance. LuaJIT can beat C in some cases once it has
time to compile all traces needed.

~~~
The_rationalist
> _V8 needs to produce relatively fast code early during page loading_

So for e.g. backend JS programs, is it possible to ask V8 to take more time to
optimize?

~~~
saagarjha
Why would you want it to take longer to optimize?

~~~
The_rationalist
To give it more time to generate better code, and thus a faster program (but a
slower launch time).

~~~
saagarjha
Most VMs will recompile long-running hot code; I’d assume V8 does this as
well.
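The tier-up behavior discussed above can be observed from plain Node (which embeds V8). A minimal sketch; the function names and iteration counts are arbitrary, and the exact promotion thresholds are V8 internals that vary by version:

```javascript
// Warm-up demo: V8 first runs a function as interpreted bytecode
// (Ignition) and recompiles hot functions with the optimizing compiler
// (TurboFan). The "warm" timing is usually much lower than the "cold" one.
function sum(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

function time(label, fn) {
  const start = process.hrtime.bigint();
  const result = fn();
  const micros = Number(process.hrtime.bigint() - start) / 1000;
  console.log(`${label}: ${micros.toFixed(1)}us (result ${result})`);
}

time('cold', () => sum(1e6));            // likely still interpreted
for (let i = 0; i < 100; i++) sum(1e6);  // warm-up: give the JIT a reason to tier up
time('warm', () => sum(1e6));            // likely optimized machine code
```

This is the warmup effect jashmatthews describes: the JVM leans on it heavily, while V8 tries to make even the cold path fast.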

------
pizlonator
This is some cool shit!

I don’t know how V8 and JSC compare on memory but I’m happy for this to become
a battleground. Nothing but goodness for users if that happens.

(Worth noting that JSC has had a “mini mode” for a while, but it’s focused on
API clients. And I don’t know if it’s as aggressive as what V8 did.)

~~~
bzbarsky
I can't speak to JSC, but at least SpiderMonkey has had memory optimizations
that are equivalent to the ones described here (e.g. discarding cold-function
bytecode) for a while... I agree that it would be interesting to have more
competition, including more measurement, in this space.

~~~
pizlonator
I would love a memory usage benchmark battle. :-)

JS is way faster than it was before the JS perf wars. Let's do it again. Then
let's fight over power!

~~~
rasz
Old Carakan (Opera Presto) would win hands down. That browser was using ~1GB
with 500 heavy active tabs loaded.

------
nammi
Somewhat off-topic, but is there an RSS feed available for the V8 blog? Their
posts are always interesting to me, but I've searched a few times with no
luck.

~~~
toupeira
Here you go: [https://v8.dev/blog.atom](https://v8.dev/blog.atom)

(found through the <link> tag on the blog)

------
non-entity
I wonder if something like this could be used to build an electron alternative
(or modify the existing electron backend to use this engine), since memory
usage is a major complaint for these applications

~~~
giancarlostoro
Well... Electron is powered by Chromium, but uses NodeJS, which is built on
V8; V8 is also the default JS engine for Chromium. So in theory this would
benefit Electron directly, and I'm not sure why anybody would waste the
engineering effort to recreate Electron if these changes will find their way
into Electron.

Edit: Didn't mean to make it sound personal on my second paragraph, but the
rest of what I wrote still applies with the context of the article and the
comment made.

~~~
non-entity
Dear lord, people are getting pointlessly defensive and aggressive. Obviously
this article is not referring to the full V8 engine, but to a light version
that may or may not make it into Chromium.

You might want to re-read the HN guidelines

> When disagreeing, please reply to the argument instead of calling names.
> "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not
> 3."

~~~
dang
Ok, but please don't respond to a bad comment by breaking the guidelines
yourself, as in the first sentence above. I realize it's hard to do when
someone has replied to you with a provocative swipe, but it's even more
necessary in such cases.

------
AriaMinaei
> If you prefer watching a presentation over reading articles, then enjoy the
> video below! If not, skip the video and read on.

This is OT, but I think we can design a format with the best of both worlds.
We can have the personal and narrated quality of videos/voiceovers, along with
the skimmable/scannable/interactive quality of web content.

~~~
DonHopkins
I agree, having a transcript of a video is very useful. I've done that with
some of my own and other people's videos.

It takes a lot less time to skim over an illustrated transcript than to watch
a video, and it lets the readers decide if they're interested enough in
actually taking the time to watch the video. Plus it's search engine friendly,
and lets you add more links and additional material.

I loved the body language in this classic Steve Jobs video so much that I was
compelled to write a transcript with screen snapshots focusing on and
transcribing all of his gestures (in parens). After reading the transcript,
it's still interesting to watch the video, after you know what body language
to look for!

“Focusing is about saying no.” -Steve Jobs, WWDC ‘97

As sad as it was, Steve Jobs was right to “put a bullet in OpenDoc’s head”.
Jobs explained (and performed) his side of the story in this fascinating and
classic WWDC ‘97 video:

[https://medium.com/@donhopkins/focusing-is-about-saying-no-s...](https://medium.com/@donhopkins/focusing-is-about-saying-no-steve-jobs-wwdc-97-ff0174c171d0)

------
userbinator
This is a little off-topic, but I find the terminology used in software these
days to be a little perplexing. Is "small", a perfectly adequate word to
describe less memory usage, not buzzwordy/trendy enough?

"light" or "heavy" just reminds me of that classic story about the weight of
software.

~~~
comex
It’s been common for a long time to refer to software as “lightweight”, though
maybe not specifically “light”.

If the title said “A smaller V8”, I’d probably assume it referred to the size
of the binary on disk.

------
z3t4
Why not allow the runtime to call the garbage collector (GC)? The GC is very
lazy by default; it should be possible to make it collect _all_ garbage.
Currently V8 just lets the garbage grow because the GC is so lazy.
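For what it's worth, Node does expose a manual full-collection hook, but only behind a flag. A guarded sketch (assuming `node --expose-gc`; without the flag the hook is simply absent and V8 collects on its own schedule):

```javascript
// Request a full GC if the hook is available (requires `node --expose-gc`),
// then report current heap use.
function forceGC() {
  if (typeof globalThis.gc === 'function') {
    globalThis.gc(); // request a full collection
    return true;
  }
  return false; // flag not set; nothing we can do from script
}

let junk = new Array(1e6).fill('garbage');
junk = null; // unreachable now, but not necessarily collected yet

const collected = forceGC();
console.log(`forced GC: ${collected}, heapUsed: ${process.memoryUsage().heapUsed} bytes`);
```

Whether a forced collection is a good idea is another question: the laziness complained about above is deliberate, deferring work V8 hopes it can do more cheaply later.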

------
magicalist
Probably should be re-titled to match the post as people are getting confused
by the reference to V8 lite, which this post isn't directly about.

> _However, in the process of this work, we realized that many of the memory
> optimizations we had made for this Lite mode could be brought over to
> regular V8 thereby benefiting all users of V8._

> _...we could achieve most of the memory savings of Lite mode with none of
> the performance impact by making V8 lazier._

~~~
dang
Yes, the submitted title ("V8 lite (22% memory savings)") broke the site
guideline which asks: " _Please use the original title, unless it is
misleading or linkbait; don 't editorialize._"

Doing this tends to skew discussions enormously, so please follow the
guidelines!

------
perl4ever
I hoped this would be about this:

[https://grassrootsmotorsports.com/forum/grm/lightweight-v8-e...](https://grassrootsmotorsports.com/forum/grm/lightweight-v8-engines/153797/page1/)

------
abc_lisper
This is great, but because you have achieved memory reduction by trading off
speed, it would be nice to see charts that show processing time increases too.

~~~
truth_seeker
> Lite mode launched in V8 version 7.3 and provides a 22% reduction in typical
> web page heap size compared to V8 version 7.1 by disabling code
> optimization, not allocating feedback vectors and performed aging of seldom
> executed bytecode (described below). This is a nice result for those
> applications that explicitly want to trade off performance for better memory
> usage. However in the process of doing this work we realized that we could
> achieve most of the memory savings of Lite mode with none of the performance
> impact by making V8 lazier.

Copied from article

~~~
bt848
I don’t know about “none”. Updating the age of the compiled representation of
a function on every function entrance doesn’t strike me as cost-free. At a
minimum that’s a store.

~~~
Leszek
Bytecode aging actually existed already for other reasons (expiring code
caches).

