
Chrome 59 with TurboFan is Sometimes Slower than 58 - jotto
https://www.prerender.cloud/blog/2017/06/22/chrome-59-is-sometimes-slower
======
chickenbane
This page does not show Chrome 59 being slower. Rather, it shows a
contrived benchmark running 38% slower. Interestingly, it also shows their
site running 32% faster.

Given the V8 team has explicitly stated the plan to optimize for real websites
rather than synthetic benchmarks [1] these results look completely appropriate
and desirable. Good work V8 team!

[1] [https://blog.chromium.org/2017/04/real-world-javascript-performance.html](https://blog.chromium.org/2017/04/real-world-javascript-performance.html)

~~~
randyrand
Perhaps the title changed? It now says Chrome is _sometimes_ slower.

~~~
ucaetano
Yep, _sometimes_, when you're running a synthetic benchmark that doesn't
represent real-world usage, it is slower.

My Ferrari is also _sometimes_ slower than my VW Beetle, when driving on an
uneven dirt road.

------
fntd
So it is slower in a benchmark that doesn't represent any real world problems,
while an actual app performs much better?

Isn't that exactly what they were going for?

~~~
gsnedders
Yes. It was a very deliberate, conscious decision to _not_ care about the
benchmarks, marketing be damned; some of that was a belief that V8 had
historically become overfit to them.

~~~
remus
I don't think it's so much a complete abandonment of benchmarks as a
de-emphasis on them. I'm sure the Chrome team is still running plenty of
benchmarks, but I'd guess they're more tuned towards 'real world' usage.

~~~
forgot-my-pw
I believe they now benchmark by running popular websites/webapps instead of
single tests in loops.

------
StillBored
The problem with JavaScript JITs is that benchmarks which run a tiny piece
of code for tenths of a second are not reflective of general usage patterns,
so the quality of the initial pass (Baseline in Firefox) is likely more
important. Worse, given fairly dynamic code paths, the system can get into a
situation where it is cycling between optimization targets and spends a lot
of time running sub-optimal code, entering/exiting the JITed code, or running
the JIT/compiler itself.
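
To make that cycling concrete, here's a hypothetical sketch (illustrative
only, not tied to any particular engine) of the kind of type-unstable call
site that can bounce a JIT between optimization targets:

    // Illustrative only: a call site whose argument types keep flipping.
    // An optimizing JIT may compile `add` for numbers, deoptimize when
    // strings arrive, re-optimize, and deoptimize again -- spending time
    // in the compiler and in transition paths instead of in fast code.
    function add(a, b) {
      return a + b;
    }

    for (let i = 0; i < 1e6; i++) {
      if (i % 2 === 0) {
        add(i, 1);            // number + number
      } else {
        add(String(i), "!");  // string + string: conflicting type feedback
      }
    }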

Anyway, there are so many engineering trade-offs it's like a big water
balloon: squeeze it here and it gets faster in this spot, but slows down
somewhere else, when what you really need is a way to let some of the water
out of the balloon.

~~~
fulafel
A bigger problem is the variability and pace of change, combined with poor
low-level inspectability of the generated code. This makes it fragile and
laborious to design primitives that compile to efficient code. Contrast this
with e.g. the JVM.

~~~
jopsen
WebAssembly is the answer to the few high-performance bottlenecks.

~~~
fulafel
WebAssembly is a poor, high-effort compile target for languages like ES6,
ClojureScript, Elm, etc.

~~~
espadrine
WebAssembly is _not_ designed to be a compile target for ES6, ClojureScript,
etc. At least not yet. Its primary target currently is languages like C, C++…
low-level compiled languages that don't have a garbage collector.

(Source:
[https://github.com/WebAssembly/design/blob/master/GC.md](https://github.com/WebAssembly/design/blob/master/GC.md))

~~~
fulafel
That's what I tried to say: wasm does not solve the problem I described.

------
romanovcode
Chrome is faster on a real website, but slower on my Hello World React app
with 1000 hello-world components.

Literally unbrowsable.

------
artursapek
I wonder why these guys are using Chrome for this? I've had no problem
pre-rendering React using a simple Node process and
ReactDOMServer.renderToString().

~~~
jotto
Because rendering in a Node process is more complicated when async state is
involved - using a browser eliminates that complexity. For simpler apps
without any async state (AJAX or WebSockets), a Node process may suffice.
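
Roughly like this (a sketch of the idea, not our actual implementation;
Puppeteer is just one way to drive headless Chrome):

    // Sketch: load the page in headless Chrome, wait for XHR/WebSocket
    // traffic to settle, then take the rendered HTML.
    const puppeteer = require('puppeteer');

    async function prerender(url) {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      // 'networkidle0' waits until there have been no network
      // connections for 500ms, giving async requests time to finish.
      await page.goto(url, { waitUntil: 'networkidle0' });
      const html = await page.content(); // DOM serialized after JS ran
      await browser.close();
      return html;
    }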

~~~
mcintyre1994
Dumb question because I have very little Node experience - I've been doing a
Node course and it uses a lot of async stuff server-side, e.g. for everything
that hits the database (using promises). Why wouldn't you just do that for
your async state when you render server-side?

~~~
jotto
You could indeed do that, it's just more work. The beauty of server-side
rendering a React app in a headless browser is that since the environment is
the same as what the app was originally designed for (a browser), things _just
work_.

For example: you load the app in your headless browser and let it load as it
would in any browser. The browser fires some events and XHR/WebSocket
requests, and you can cleanly wait for them to finish, as opposed to doing
something like this:
[https://techbrunch.gousto.co.uk/2016/10/10/isomorphic-react-v2/](https://techbrunch.gousto.co.uk/2016/10/10/isomorphic-react-v2/)
which requires you to do some extra configuration in your app to help support
a Node environment.

TL;DR: ReactDOMServer.renderToString won't wait for your async requests to
finish; you need them to execute and finish before you call renderToString.
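
A minimal sketch of that extra work in plain Node (fetchInitialData is a
hypothetical helper; the point is that all data must be resolved before the
synchronous renderToString call):

    // Sketch: renderToString is synchronous, so every async request the
    // app would make in the browser has to be resolved up front.
    const React = require('react');
    const ReactDOMServer = require('react-dom/server');
    const App = require('./App'); // your root component

    async function renderPage(url) {
      // Hypothetical helper: performs the XHR/WebSocket work the app
      // would otherwise do after mounting in a browser.
      const initialData = await fetchInitialData(url);
      // Hand the resolved data in as props so rendering needs no async work.
      return ReactDOMServer.renderToString(
        React.createElement(App, { data: initialData })
      );
    }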

~~~
underwater
This sounds like a major kludge to me. What if the page keeps firing events?
What if the page is relying on cues from the browser (dimensions, cookies, JS
capabilities, etc.) to know the correct requests to make?

If you care about performance you should design an architecture where it's
possible to determine exactly which parts should and can be executed on the
server.

~~~
artursapek
Agreed. A React component that can't render _something_ without extra async
state updates is a poorly designed React component.
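
For example, a component shaped like this (illustrative only) always has
_something_ to render synchronously, so renderToString produces useful markup
even before any async state arrives:

    // Illustrative: render a placeholder until async state shows up,
    // instead of rendering nothing (or throwing) without it.
    const React = require('react');

    function UserGreeting({ user }) {
      if (!user) {
        return React.createElement('p', null, 'Loading…');
      }
      return React.createElement('p', null, 'Hello, ' + user.name);
    }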

------
dacohenii
Here's an interesting talk about optimization and JIT compilation in V8:
[https://www.youtube.com/watch?v=p-iiEDtpy6I](https://www.youtube.com/watch?v=p-iiEDtpy6I)

~~~
forgot-my-pw
Also this one from the last IO:
[https://www.youtube.com/watch?v=N1swY14jiKc](https://www.youtube.com/watch?v=N1swY14jiKc)

------
neoeldex
I've also noticed the Chrome devtools using 100% CPU quite often since I
updated to v59. Pretty annoying, since the Firefox devtools aren't (imho) up
to par with Chrome's.

------
homakov
If you can make it eat less memory, go ahead and make it 2x slower; I'm fine
with that.

~~~
PetahNZ
Why? Memory is cheap.

~~~
nirvdrum
This really hasn't been true on consumer devices for a while now. Sure, you
can add RAM to a server for cheap, but no one really runs a browser there. In
many cases now, adding more memory to a device means buying a completely new
device. Unfortunately, it doesn't matter much if the component is cheap if
there's no way to actually install it.

~~~
RussianCow
I think the parent's comment still stands if you're comparing the cost of
memory relative to processing power. Most devices have more memory than most
people need, but there is never enough processing power. So, given the
tradeoff of having Chrome (or whatever other program) use more memory in
exchange for speed, I would take it.

~~~
Sylos
Because of the way RAM works, you also always have a use for more of it:
memory not allocated to programs gets used for caches. Chrome is slowing
down other programs on your system by using more RAM, so it's not using more
memory in exchange for speed; it's sacrificing the speed of the rest of your
system to speed itself up.

~~~
johncolanduoni
Just because your disk cache is expanding to fill your available RAM (which
doesn't even always happen depending on the kernel/RAM size) doesn't mean that
extra cache buys you anything. At some point it's just going to be holding
stuff that isn't used before it's evicted.

~~~
spacehunt
Disk cache? Nowadays Chrome is eating so much RAM that it's pushing other
programs into swap.

