
React Performance - teamhappy
https://aerotwist.com/blog/react-plus-performance-equals-what/
======
davedx
I think this is a contrived, or over-simplified, example.

The DOM manipulation is fast _in this case_ because it's a simple appendChild
every time. In other cases like where elements in the middle of a table are
updated, you would get into a mess writing vanilla code, either complexity or
performance wise, because you'd have to traverse the DOM to get to where you
need to do updates and do each update individually. React batches such things
together, and does one single update.

Show me a benchmark of an actual real app written in vanilla JS and React. I
suspect the DOM manipulation time would be way higher.

~~~
acdha
It's not just simple appendChild calls. I actually worked on an app which
updated a large table – displaying file metadata, checksums calculated in web
workers, etc. for a delivery – and found React to be around 40+ times slower
than using the DOM[1] or even simply using innerHTML, getting worse as the
number of records increased.

The main trap you're falling prey to is the magical thinking which is sadly
prevalent about the virtual DOM and batching. Basic application of Amdahl's
law tells us that the only way the React approach can be faster is if the
overhead of the virtual DOM and framework code is balanced out by being able
to do less work. That's true if you're comparing to, say, a primitive
JavaScript framework which performs many unnecessary updates (e.g. re-
rendering the entire table every time something changes) or if the React
abstractions allow you to make game-changing optimizations which would be too
hard for you to make in regular code.

Since you mentioned batching, here's a simple example: it's extremely hard to
find a case where a single update will be faster because the combined time to
execute a JS framework and make an update is always going to be greater than
simply making the update directly. If, however, you're making multiple updates
it's easy to hit pathologically bad performance due to layout thrashing[2]
when the code performing an update reads something from the DOM which was
invalidated by an earlier update, requiring the browser to repeatedly
recalculate the layout.

That can be avoided in pure JavaScript by carefully structuring the
application to avoid that write-read-write cycle or by using a minimalist
library like Wilson Page's fastdom[3]. This is quite efficient but can be
harder to manage in a large application and that's where React can help by
making that kind of structure easier to code. If you are looking for a
benchmark where React will perform well, that's the area I'd focus on and do
by looking at both the total amount of code and the degree to which
performance optimizations interfere with clean separation, testability, etc.
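
The read-then-write batching described above can be sketched as a minimal fastdom-style scheduler. This is a simplified illustration of the idea, not fastdom's actual API (the real library schedules its flush on requestAnimationFrame):

```javascript
// Queue DOM reads and writes separately, then flush all reads before all
// writes. No write invalidates layout before a pending read, so the browser
// recalculates layout at most once per flush instead of once per
// write-read-write cycle.
function createScheduler() {
  const reads = [];
  const writes = [];
  return {
    measure(fn) { reads.push(fn); },  // e.g. () => el.offsetHeight
    mutate(fn) { writes.push(fn); },  // e.g. () => { el.style.width = '10px'; }
    flush() {
      while (reads.length) reads.shift()();   // all reads see one consistent layout
      while (writes.length) writes.shift()(); // all writes invalidate it together
    },
  };
}
```

In a browser the flush would run once per frame; here it is manual so the ordering is easy to see.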

EDIT: just to be clear, I'm not saying that it's wrong to use React but that
the reasons you do so are the same as why we're not writing desktop apps
entirely in assembly: it takes less time to build richer, more maintainable
apps. The majority of web apps are not going to be limited by how quickly any
framework can update the DOM.

1\. I partially reduced that to a smaller testcase in
[https://gist.github.com/acdha/092c6d79f9ebb888496c](https://gist.github.com/acdha/092c6d79f9ebb888496c)
which could use more work. For simple testing that was using JSX inline, but
the actual application used a separate JSX file compiled following normal
React practice.

2\. See e.g. [http://wilsonpage.co.uk/preventing-layout-
thrashing/](http://wilsonpage.co.uk/preventing-layout-thrashing/)

3\.
[https://github.com/wilsonpage/fastdom](https://github.com/wilsonpage/fastdom)

~~~
hajile
React allows you to optimize as much as you want while still keeping the
component nature. In this case, you realize you need infinite appends.

You make a component that puts a reference div into the DOM. Next you override
the default shouldComponentUpdate so that when you get new data, you create
the raw DOM elements using a document fragment and call
this.refs['elem'].appendChild(newDom) (0.14 syntax).

Premature optimization is usually bad. React allows you to write your app and
then go as deep as necessary when optimizing later. The fact that you can do
this while still keeping within react is a testament to the power of the
framework/library.

The fact that a Google employee who pushed web components has a problem with a
framework he doesn't know in a case that should usually be avoided without the
optimizations that are possible says more about him than the framework he is
criticizing.

~~~
acdha
> The fact that a Google employee who pushed web components has a problem with
> a framework he doesn't know in a case that should usually be avoided without
> the optimizations that are possible says more about him than the framework
> he is criticizing.

Or possibly that you haven't paid enough attention to what he wrote. He was
very clear to mention that React's productivity wins are significant but
wanted to make a point about how important it is to regularly test performance
rather than just assuming hype is universally true – or, conversely, that
people talking about native browser performance have done the broad, valid
benchmarking needed to support sweeping general claims.

As an example of the difference, you're reacting defensively, trying to
downplay real concerns that are easily encountered on any large project, and
attacking the source rather than engaging with the actual demonstrated
problem. That might feel good, but unfortunately problems aren't fixed by
pointing out that the reporter works for what you perceive as The Other Team.

I'm glad to see actual React developers are responding differently by trying
to improve performance on weak points:

[https://twitter.com/sebmarkbage/status/616950267511070721](https://twitter.com/sebmarkbage/status/616950267511070721)

This is clearly the effect Paul Lewis wanted from his post and it's the one
which improves things for everyone who uses React.

------
nemanja
This smells like a shill piece by a Google developer advocate [1] with a horse
in the race [2].

The key benefit of React is an extremely low cognitive load. There are only
three simple concepts to grok (props, state and lifecycle) to get productive.
The code is very easy to reason about (components are essentially pure
functions of props and state) and debugging is much simpler than with vanilla
JS or jQuery on a project of any meaningful size.

With respect to performance, React shines when it comes to DOM mutations (e.g.
removing a div from the DOM, creating a new div, inserting the new div into
the DOM), which is what you generally encounter under real-world load. Here is
a demo illustrating such load [3]. The benchmark offered by OP is amazingly
contrived (actually, it feels designed to show React in a bad light, and the
lack of full source code is very telling). I struggle to think of a real-world
scenario for an append-only page with 1000+ images in the DOM; there is simply
no valid reason to do that. React in turn makes it really easy and fast to
implement infinite scrolling (similar to UITableView), and there are a couple
of good open-source components that address that.

[1] See the bottom of [https://aerotwist.com/](https://aerotwist.com/)

[2] [https://aerotwist.com/blog/polymer-for-the-performance-
obses...](https://aerotwist.com/blog/polymer-for-the-performance-obsessed/)

[3]
[https://www.youtube.com/watch?v=1OeXsL5mr4g](https://www.youtube.com/watch?v=1OeXsL5mr4g)

~~~
Lazare
Yeah. I mean...

First off, the attraction of React, to me, is mostly based on being able to
write maintainable, testable, reusable, easy-to-reason-about code when working
on large, complex webapps. Yes, React has _relatively_ high performance, at
least when compared to a sluggish framework like Angular on those large apps,
but that's not even the core selling point (as far as I'm concerned). But
okay, _part_ of the attraction of React could be summed up as "better
performance than Angular on complex applications", and I guess a benchmark
proving (or disproving) that would be nice to have.

That's not what we got. Instead, this guy tested the raw performance of some
very simple React code against the sort of vanilla code you'd never ever ever
write in the sort of app that React (or Angular) is actually designed to help
with. So... yes, React is slower than vanilla JS at stuff that vanilla JS is
faster at. Shocking.

I'm struggling to think of a less meaningful way to do the analysis.

~~~
mainguy
Said every developer who's never had to "just speed up" an application written
with a framework that makes development/debugging easy, but has fundamental
performance issues that cannot be overcome without a significant overhaul.

------
nchudleigh
Posting benchmarks without full source code is bad. And you should feel bad.

I remember seeing people bashing angular 1.2/3 for its speed and then looking
at their source and they weren't using some of the most powerful features
_cough cough react conf_
([https://youtu.be/z5e7kWSHWTg?t=327](https://youtu.be/z5e7kWSHWTg?t=327))
This was eventually corrected by someone who looked at the source code, forked
it, and made it perform on par with the React version.

I don't doubt the vanilla JS is faster (in this case), but really, you need to
make your source available before posting to your blog. Benchmarking in a fair
way is hard, and you are damaging the public perception of (from what I hear)
a great framework. Maybe this is justified, but at least give the fanboys a
chance to call you out - maybe everyone will learn something new.

------
dgreensp
Frustratingly, we can't actually look at the React code he used, because it is
Google proprietary code(?). But there are definitely slow and fast ways to
take an array of elements and render it. Ideally, each element would be
wrapped in a component with a "key" property to speed up diffing.

Also, the standard way to do infinite scrolling when you care about
performance, especially on mobile, is to reuse a small number of elements,
just enough to cover the screen and then some. You don't actually create 1500
elements.

~~~
lnanek2
Good point about reusing. Re keys, though, he flat out says he uses them at
the bottom.

------
AriaMinaei
Obviously diffing is going to have some overhead.

But this overhead is minimal in most use-cases. The benchmark in this article
is not a real use-case.

If you wanna show 1200 images, or 1200 elements of any kind, in one web page,
then you should only create DOM elements for those that should actually be
visible to the user. You read the scroll position, calculate which ones would
be visible in the viewport, create DOM elements for those, and disregard the
rest.

In most real-world applications, this technique would suffice. Your DOM and
vDOM would be small, and you'd only be diffing maybe 5-10 elements at a time.
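
The windowing arithmetic above can be sketched in a few lines, assuming fixed-height rows (the function name and parameters are illustrative):

```javascript
// Given the scroll offset, compute which fixed-height rows should have real
// DOM nodes: the rows intersecting the viewport, plus a small overscan buffer
// so scrolling doesn't reveal blank space before the next render.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last }; // render only rows first..last, positioned absolutely
}
```

However long the list, the DOM (and the vDOM to diff) stays the size of the viewport.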

Although I can think of use-cases where this technique, and React's style of
coding, may not be sufficient. One example is iOS's Photos app. Sometimes it
animates hundreds of elements at a time (when you're viewing photos by year or
location). I guess diffing might not be a fast enough solution for that
use-case.

~~~
bzalasky
Completely agree. I had to implement something like this when I worked at
Stipple, and the biggest problem became optimizing the DOM interactions
(adding and removing images from a masonry feed as the user scrolled).

Vanilla JS or React, you can't have 1200 images sitting on a web page all at
once without optimizing for what the user is actually looking at and
interacting with.

------
aidenn0
My React app initially took about 10 seconds to render changes on an iPhone
5s and a 1st-gen Moto X. A user with a Chromebook also complained that it
degraded other apps. I did some digging and found that many React developers
use some form of immutability with the pureRender mixin. This is a
_huge_ boost to performance. Doing that alone made it slow but usable, and
then I batched up changes to state, and split parts of the application into
different tabs to improve things even more.

To this day, I still wouldn't be done with the application had I used vanilla
JS, so even with the performance tuning it was worth it, but it was not
without cost.

~~~
JohnHaugeland
i've been doing react for almost two years (since before it was publicly
released) and i've never seen anything render that slowly, and i basically
never use purerender

if you share that code with me i will find the actual problem for you, just
because i'm curious. it isn't react.

~~~
aidenn0
It wasn't _just_ React; I was definitely doing unneeded state changes, and
I've now got it to the point where there is no noticeable delay.

One big issue was that I wanted dependent fields to update as you type, which
also meant validating as you type. I added a timer to delay that, so
validation wasn't running on every single keystroke but instead waited until
you stopped typing.
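
That delay-until-you-stop-typing pattern is a classic trailing-edge debounce; a minimal sketch (the timer functions are injectable here only so the behavior can be exercised without real timers):

```javascript
// Trailing-edge debounce: repeated calls reset the timer, and the wrapped
// function runs only once the calls stop for `delay` ms. setTimer/clearTimer
// default to the real timer functions but can be swapped out in tests.
function debounce(fn, delay, setTimer = setTimeout, clearTimer = clearTimeout) {
  let id = null;
  return (...args) => {
    if (id !== null) clearTimer(id); // a new keystroke cancels the pending run
    id = setTimer(() => { id = null; fn(...args); }, delay);
  };
}

// e.g. input.addEventListener('input', debounce(validate, 300));
```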

Reducing the number of dependent values in the DOM tree at a single time
helped a lot too, and there were logical categories to split them into. My
validation code was not particularly optimized either, since it was originally
designed for batch-processing.

I really don't get the huge concerns people have with performance. I find
Chrome's profiling tools very primitive compared to what I'm used to, but they
were more than adequate for me to steadily improve performance.

There are probably still some improvements to be had, but right now there are
no issues with responsiveness on mobile, so I stopped looking.

~~~
thedudemabry
I can totally empathize with the large form woes, having worked on a couple of
similar client projects with similarly unusual requirements. Angular (the
first version, I hear awesome things about the latest), in particular, fell
down hard when managing a very large number of semi-dependent input fields. We
ended up customizing large swaths of its collection diffing code to gain minor
performance increases before designing a pure JS solution that was very
performant, if very expensive and time-consuming to implement. I have no
reason to think that React would have fared any better in either case.

------
Ciantic
What do we learn here? I don't see a big lesson: this benchmark compares React
to vanilla JS, and there is a reason frameworks exist.

I know this is obvious, but I'll say it anyway: a benchmark against a
hypothetical alternative is a bit pointless. React should be benchmarked
against frameworks trying to solve the same problem, like Web Components
(Polymer), Angular, or others.

~~~
matthewmacleod
To be fair, he was pretty clear about the reason for it:

 _I know that React’s performance has been compared to that of other
frameworks, like Angular. What I wanted to do was my own test of it against
plain old vanilla JavaScript…The docs claim that JavaScript is fast, and it’s
meddling with the DOM that’s slow. So for that to be true we should be largely
able to switch out React with something else and see pretty similar
performance characteristics._

IOW, it's a common assumption that React has minimal overhead for DOM
manipulation, and this benchmark suggests that it rapidly becomes performance
constrained on relatively small DOM trees, especially on mobile.

I don't know if this is accurate or not (I suspect there's something a bit
off, because the numbers don't look realistic to me) but it's worth looking at
anyway!

~~~
Ciantic
To me the only lesson seems to be: this is how React works. The diffing is
known to work like this; if you are not calculating the diffs beforehand (or
using some non-standard way to set the state), there is not much one can do
besides obvious things like immutables. Diffing massive tables against each
other on every update is sure to be slow.

Comparing it to vanilla JS is the pointless part to me; that's like comparing
appending to a list against running a diff algorithm.

~~~
buster
The point of the benchmark is that it's obviously not the DOM manipulation
that is slow but the JS part. So moving as much as possible away from DOM
manipulation to JS heavy lifting can, in theory, result in performance loss.
That's pretty much all the benchmark says (and seems to show). Now, the test
case is some synthetic benchmark, but unless you take some time to program
Facebook in React and in plain JS and do testing, you'll need to take some
shortcuts.

Of course it is a lot of work to diff two DOM trees, but the general
impression of React was that the DOM is so unbelievably slow that it will
still be much faster to do the diffing (which I never understood, technically,
but what the heck, FB engineers are wizards!). At least, that was the
impression I got from the React announcements. And I don't even code JS
nowadays, so I couldn't care less whether you prefer React or Ext or write
everything as a Silverlight plugin :P

~~~
davedx
It's a contrived example. The DOM manipulation is fast _in this case_ because
it's a simple appendChild every time. In other cases like where elements in
the middle of a table are updated, you would get into a mess writing vanilla
code, either complexity or performance wise, because you'd have to traverse
the DOM to get to where you need to do updates and do each update
individually. React batches such things together, and does one single update.

~~~
Akkuma
>In other cases like where elements in the middle of a table are updated, you
would get into a mess writing vanilla code, either complexity or performance
wise, because you'd have to traverse the DOM to get to where you need to do
updates and do each update individually.

If you have a table and want to edit information, this is trivial to do in a
React-like way without React. The edit button stores a reference to the row;
you edit values, update the data store, render out a new row element, remove
the old element, and insert the new one. Yes, React will generally be less
code, and the argument becomes whether this approach is more complex, which it
is, but I'd say neither the complexity nor the performance truly suffers. I
bet the performance will be faster, as you removed diffing entirely and React
has to perform your logic anyway.

------
bm_i
I'd love to see the full source of the render function(s) here, we may not be
seeing the full story... Are the React children having appropriate `key`
attributes set as the list increases in size? etc

~~~
kinlan
I spoke with Paul, he is using `key` and said he is following best practice as
per the docs. That being said, we do want the code to be open, we just have a
rather heavy OSS process that can take a while to get it out (it's a nut ache
at the moment).

~~~
insin
Is he using shouldComponentUpdate() [1] to avoid diffing existing photos when
new ones are added?

[1] [https://facebook.github.io/react/docs/advanced-
performance.h...](https://facebook.github.io/react/docs/advanced-
performance.html#avoiding-reconciling-the-dom)

~~~
megaman821
It seems like the end result of React's diffing algorithm is going to be an
appendChild, exactly what happens in the vanilla JavaScript. But I don't think
the diffing algorithm itself is the reason for the time disparity, but rather
React's rebuilding of the tree to diff against the DOM. I agree with you:
shouldComponentUpdate() should massively speed up React.

~~~
d4n3
Right, with lots of elements, React still needs to render and diff each
component which for a lot of elements causes a lot of garbage and diffing
work. ShouldComponentUpdate / PureRenderMixin avoids this altogether when
props/state isn't changed so this gives a significant boost with lots of
elements.
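
The check PureRenderMixin performs is essentially a shallow comparison of old and new props/state, roughly like this (a simplified sketch of the idea, not React's actual internals):

```javascript
// Shallow equality: same own keys, each value identical by reference (===).
// A pure-render check skips re-rendering (and therefore diffing) whenever
// both old/new props and old/new state pass this test.
function shallowEqual(a, b) {
  if (a === b) return true;
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  return aKeys.every(
    (k) => Object.prototype.hasOwnProperty.call(b, k) && a[k] === b[k]
  );
}
```

This is also why in-place mutation defeats the optimization: if you mutate the same object, the old and new references compare identical even though the contents changed, so the component never re-renders.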

------
arenaninja
I don't mean to call out the test as flawed, but I'm not sure something with
infinite scrolling is the sort of application I would use React for. I use it
a lot for modals and forms, but if you're only appending to the DOM and
sometimes clearing it out (assuming you have functionality that 'clears' the
page once some images are loaded), it's a simple enough case for vanilla JS.

------
amelius
In setState() the new photos are appended to the end of a list, and this list
is re-checked in its entirety at every update. So, the React solution is
actually at least O(N^2).

Something to worry about!
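
The arithmetic behind that: if each of N appends re-diffs the whole list, the total items examined is 1 + 2 + ... + N = N(N+1)/2, which is O(N^2). A toy illustration:

```javascript
// Total items re-examined when appending one item at a time and re-diffing
// the entire list on every update: the classic quadratic accumulation.
function totalDiffWork(n) {
  let work = 0;
  for (let len = 1; len <= n; len++) work += len; // each diff visits all `len` items
  return work; // n * (n + 1) / 2
}
```

So the 1000-image run does roughly 500,500 item comparisons in total, not 1000.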

~~~
rakoo
I'd love to see a little bit of ImmutableJS added to see how much of a speedup
we have.

~~~
ajanuary
How does ImmutableJS work here? If the data were turned into a straight list
of immutable records, each comparison would be quicker, but the complexity
would remain the same. You'd have to do something like turning it into a tree
to reduce the complexity.

~~~
dugmartin
ImmutableJS's lists are collections implemented behind the scenes as trees.
When you append to an immutable list, a new list is returned, so the
comparison in shouldComponentUpdate() becomes a simple reference comparison
between the old and new list (return nextState.list !== this.state.list).
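
That reference check can be sketched without the library (a simplified illustration; note a plain array spread copies the whole list, whereas ImmutableJS's tree-backed lists share structure so appends are cheap):

```javascript
// Never mutate the list; always return a new one. Change detection in
// shouldComponentUpdate then becomes a single reference comparison.
const append = (list, item) => [...list, item];

// Hypothetical shouldComponentUpdate body for the component holding the list:
const listChanged = (prevList, nextList) => nextList !== prevList; // O(1)
```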

~~~
ajanuary
That doesn't help reduce the computational complexity in this instance. After
appending, the lists are different, so it renders the component, which starts
iterating the children and checking each one in turn to see if it has changed.
To reduce the computational complexity you'd need to do something like walking
the tree that backs the list instead, so you can ask it "did any of the
children under this node change?" and start skipping the comparisons for big
chunks of the list. I don't know if React and Immutable.js can be combined in
this way without significantly rewriting your components. I suspect not.

------
jnbiche
I agree that he should have waited until the code was open source to post the
benchmarks. It's not cool to post a "negative" benchmark without giving the
framework's authors a chance to correct any misuse (and it sounds like there
may well be some here).

~~~
mainguy
It's only a negative benchmark if you chose to perceive it that way. I think
it's a valid data point, it doesn't say anything (that I perceive)
negative...unless you're really worried about javascript performance of the
framework versus relative to straight javascript.

------
_alexander_
Where is the source code? I need proof that React has performance problems. I
use React.JS for a large application with heavy logic on one page, and it all
works pretty fine on mobile devices and desktop...

~~~
TheCoelacanth
I only see part of the source code, but what is there has two pretty big
performance blunders (no PureRenderMixin, mutable state).

~~~
vcarl
Skipping PureRenderMixin and using mutable state are still pretty common in
React applications, so I think it's realistic not to include them.

------
JohnHaugeland
my god, a library written in javascript is slower than javascript itself? the
library doesn't have negative execution time?

------
d4n3
PureRenderMixin would definitely help here, even to the point where it's
comparable with the JS solution.

Even though keys help here, render is still called for every image every time
and then diffed.

He'd have to extract the image divs to a component and put PureRenderMixin on
it so that the component's render is skipped altogether.

------
colinramsay
I only had a quick look at this, but it might be the wrong approach. It looks
like every time a new photo is concatenated onto the data array, every single
node will be re-rendered, not just the new ones. This won't happen with the
raw DOM version because of the "if (!imageInDOM)" line.

A better way to do this could be with child components with their own unique
key, but since we can't see the render method I'm not sure exactly what's
going on.
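
The keyed idea can be sketched in plain JS: match the next children to the previous ones by key, and only create nodes for genuinely new keys (a simplified illustration of keyed reconciliation, not React's actual algorithm):

```javascript
// Given the previous children indexed by key and the next list of keys,
// decide which nodes can be reused as-is and which must be created. With
// stable keys, an append-only update creates exactly one new node.
function reconcile(prevByKey, nextKeys) {
  const reused = [];
  const created = [];
  for (const key of nextKeys) {
    (prevByKey.has(key) ? reused : created).push(key);
  }
  return { reused, created };
}
```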

nb: I'm not saying React is definitely "fast" here, just pointing out a
potential flaw. I'm also not saying I'm definitely right ;)

------
jguimont
His vanilla example is exactly what Ember.js does: it already knows which data
changed and which node that data was linked to, so it will diff the vDOM only
for those nodes.

I would love to see the same comparison with Ember.js in the mix.

------
matthewmacleod
Given that Chrome has profiling built right in, shouldn't it be basically no
effort to conclusively find out where time is being spent?

~~~
kinlan
Sure, but I suspect it's a question of whose role it is to do that. We (or at
least Paul) could go back and see if we can submit issues or even fixes to the
product. So someone could do it; it's just a question of who cares enough, I
suppose.

~~~
JohnHaugeland
it sort of remains amazing that you're convinced the problem isn't this casual
benchmark (which has already had two severe problems exposed and doesn't show
its approach), but rather a library that is in heavy use and that everyone
else seems not to have these problems with
~~~
mainguy
Well, I think the author's point is that the conventional hive-mind wisdom in
the community using this product has been that DOM manipulation causes the
performance problems and the framework is wicked fast and not a source of
performance issues... which may lead naive or new developers to never bother
considering it as a potential source of problems. This very clearly
illustrates that you shouldn't always disregard the framework as a potential
source of problems...

------
agmcleod
I've been using a homebrew "framework", really a slew of classes and
singletons, with Handlebars templates and jQuery. It performs fine for a
desktop app; I re-render things as they change. But React and Flux would be a
much nicer way to organize and build the code. Plus, for all the stuff I'm
re-rendering, the diffing React would give me would make it more performant.

I think, though, that examples like this are good to show you shouldn't
always use a framework.

Furthermore, for mobile I've used ES6 classes, Handlebars for templates, and
jQuery to drop the HTML in and do event bindings. It runs decently well in a
Cordova app, but I do find some things lag. I've done a bit with React in
Cordova as well and found similar problems. Getting a speedy mobile app built
with HTML5 is hard. I really think that for a proper experience, native is the
way to go. I've been liking React Native for a side project of mine, but I'm
starting to consider using Swift instead.

------
JDDunn9
Interesting to see how defensive people are on here about their favorite new
toy. I didn't even find the article that harsh actually. Of course vanilla JS
will outperform any toolkit. If you like React, use it for that reason, not
cause you thought it was magic...

~~~
Offler
Adding 1,200 image components to a web app and never cleaning up the ones that
aren't visible is a poor approach to building a mobile web application.

It doesn't matter what technology you use, it's a bad idea.

So he's basically tested the performance of a badly architected application.

~~~
JDDunn9
Kind of like how React benchmarks like to create tables with hundreds of rows
and dozens of columns to test against other frameworks?

~~~
Offler
You mean a grid? A component one is often asked to add to web applications for
businesses. I've added 4 in the last week alone to apps I've worked on. It's
bread and butter stuff.

~~~
JDDunn9
You shouldn't be making large grids for the web. Terrible practice. You should
paginate.

~~~
Offler
I'm not sure why it's terrible practice, the only reason I can think of is a
performance one. As I said earlier in the thread though that's only a problem
if you create infinite lists and never clean up after yourself.

The correct way to design such components would be to only create DOM for
elements in view. Which is why this test is flawed.

------
Akkuma
I once had an interview with a YC company where I said I didn't believe React
was necessarily the right fit for applications with simple DOM manipulation,
like one I had worked on. I pointed out that you can outperform it when you
know exactly what needs to happen to the DOM, as in this example of
continually appending new elements. React overall is great, but people need to
realize that there are faster React-like frameworks and that sometimes React
isn't faster; however, you normally get simpler code to reason about and less
code to maintain.

~~~
d0m
It's always a trade-off between speed of development/maintenance and
performance. I think React is the best "good enough" compromise there is. It's
really fast and a joy to work with. But obviously, in certain very specific
cases, vanilla JavaScript, where you have 100% control over the
JavaScript/DOM, will be faster; there's no arguing about that.

------
carterehsmith
Some of the measurements here are bogus.

Consider the chart under "Mobile: Vanilla" heading. It seems to indicate that
the total rendering time _drops_ with the number of pictures. That is _almost_
physically impossible. But it is super easy to get that kind of result in
benchmarks, e.g. by not properly clearing caches between runs.

~~~
nevir
Don't make assumptions.

Here's a possible explanation (but who knows):

At a certain page height, WebKit/Blink may change rendering behavior. It
doesn't need to render elements that are far away from the viewport.

~~~
carterehsmith
You did not think this through, did you? His chart shows that rendering 1000
photos is _faster_ (>3x) than rendering 200 photos. Think about it. How long
did it take to render photos 200-1000 (in the 1000-photo run)? Negative time?
Please don't tell me that V8 is so awesome that it can run JS in negative
time.
~~~
curveship
Easy there. You've misunderstood the benchmark. It shows the time to add 100
images, given the existing number of images. So adding the 900th-1000th image
was ~3x faster than adding the 100th-200th.

~~~
carterehsmith
I realized that, and looked over the thing again, but the measurement still
does not add up - even the author calls it "bamboozle" and says "I have no
idea what’s happened at the end.".

So if the benchmark produces weird results, and is published without source
code, so that the results cannot be reproduced, why would anyone trust it?

Even the author suspects that V8 optimized away the vanilla code... and if
that is what happened, then apples are being compared to oranges and the whole
conclusion is bogus. Which was kind of my point.

