
React in concurrent mode: 2000 state-connected comps re-rendered at 60FPS - macando
https://twitter.com/0xca0a/status/1199997552466288641
======
csande17
From a marketing standpoint, I'm not sure demos like this that draw
comparisons between JavaScript frameworks and 3D game engines are a great
idea. The author describes updating 2000 cubes every frame as an "impossible
amount of load", and claims that React will soon "run circles around even the
best performing manual WebGL apps".

Here's Unity maintaining a smooth framerate while updating three times that
many cubes:
[https://www.youtube.com/watch?v=qVMfKJfsHQg](https://www.youtube.com/watch?v=qVMfKJfsHQg)

Unlike the React demo, all of the cubes actually move each frame (it's not
"rescheduling" updates to later frames like React Concurrent Mode does), and
it's doing a complex physics simulation to decide where the cubes should go.

~~~
Jasper_
Yeah, maintaining state for 2000 elements is not a hard problem. I've written
CPU particle systems which handle ~10k particles per frame, some of which even
run entirely in the browser.

This shot from Super Mario Galaxy simulates around 3,000 particles, on top of
all the other bone/joint animations that are happening. Performance like this
is possible in a web browser, but you wouldn't think so given how popular
React and ThreeJS are.

https://noclip.website/#smg/AstroGalaxy;AAI4t49Qk^u9Ld&YUm,m_W-zy_4~(x_UcUO6UP@]=WR2Z39kY1tTqnNC9G:U0+^8

~~~
onion2k
_Performance like this is possible in a web browser, but you wouldn't think
so given how popular React and ThreeJS are._

Updating the state of thousands of DOM nodes at a steady 16ms per frame in a
browser is hard enough that you have to go beyond "just update them all every
frame" like in a simple particle system. To go fast you have to move to
working out what needs to update based on what changed in the state.

Most web app developers don't want to have to think too hard about how the UI
gets on the screen. They, very reasonably in my opinion, want to work on the
app logic itself instead. Unfortunately that's how we end up with janky UIs. If
React's concurrent mode can just be fast _by default_, by spreading out updates
when the framerate is falling, that's good for everyone.

~~~
Jasper_
> Updating the state of thousands of DOM nodes at a steady 16ms per frame in a
> browser is hard enough

No, it's really not. I've done it before, and profiled it. 16ms is a super
long time to a computer. You should easily be able to update tens, if not
hundreds of thousands of DOM nodes, assuming you aren't doing anything weird
like blocking on layout in one of them.

Let's aim for frameworks that are in the ballpark of peak raw performance,
before we look to parallelism and multi-threading as a solution.

That someone could believe this was peak performance for a hand-rolled WebGL
codebase shows how low and out-of-touch our expectations for performance
really are.

~~~
naikrovek
I really wish more people felt the way you do. You clearly have done the work
to prove this to yourself. No one saying otherwise has done that work.

Two things produce conviction in people regarding technical matters: ignorance
and experience. You clearly have the experience.

Tech needs more people like you.

~~~
exogen
That might be true, if this person were not most likely just making things up.
Jasper_ is vastly overstating things if not outright lying. Updating that many
elements is possible with canvas maybe, but DOM nodes, no.

Here's a dead simple demo in pure vanilla JavaScript:
[https://brianbeck.com/particles.html](https://brianbeck.com/particles.html)

Go ahead and see what number you get to before it drops below 60fps.
Personally I have to _decrease_ it to 1000 or so on my 2018 MacBook Pro.

And that's updating a style property that does _not_ cause reflow calculation.
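
For reference, roughly what a demo like that does under the hood (my own
simplified sketch of the idea, not the linked page's actual code):

    // Sketch: N absolutely-positioned divs, moved each frame by writing
    // style.transform (no layout/reflow, but still style recalc + paint).
    // Assumes a ".particle" class styled with position:absolute elsewhere.
    const N = 2000;
    const particles = [];
    for (let i = 0; i < N; i++) {
      const el = document.createElement('div');
      el.className = 'particle';
      document.body.appendChild(el);
      particles.push({
        el,
        x: Math.random() * innerWidth,
        y: Math.random() * innerHeight,
        vx: Math.random() - 0.5,
        vy: Math.random() - 0.5,
      });
    }

    function tick() {
      for (const p of particles) {
        p.x += p.vx;
        p.y += p.vy;
        p.el.style.transform = `translate(${p.x}px, ${p.y}px)`;
      }
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);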

Anyone including Jasper_ is welcome to post a similar demo of their 100x+
performance improvements on this, but I doubt they will.

~~~
csande17
It's important to note that a large portion of the frame time in that demo is
spent in browser code, recalculating styles and redrawing. I modified the demo
to display the amount of time spent in the actual function that updates the
DOM nodes, and I can crank it to 50,000 before that number exceeds 16
milliseconds:
[https://fiddle.jshell.net/btzqw47u/show/](https://fiddle.jshell.net/btzqw47u/show/)
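
Roughly the two numbers being measured (a sketch with hypothetical names like
`updateParticles` and `statsEl`, not the fiddle's exact code): the time spent
inside the DOM-writing function vs. the real frame-to-frame interval, which
also includes the browser's style recalc, layout and paint work.

    let lastFrame = 0;
    function tick(timestamp) {
      const frameInterval = timestamp - lastFrame;  // full frame cost
      lastFrame = timestamp;

      const start = performance.now();
      updateParticles();                            // writes style.transform on every node
      const jsTime = performance.now() - start;     // JS/DOM-write cost only

      statsEl.textContent =
        `update: ${jsTime.toFixed(1)} ms, frame: ${frameInterval.toFixed(1)} ms`;
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);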

For comparison, my quick-and-dirty React version of the demo spends around 100
ms in React/updating DOM elements when you have 50,000 particles:
[https://jsfiddle.net/t6js8k5e/](https://jsfiddle.net/t6js8k5e/) (If you try
that one at home, be warned that the browser might hang for a few seconds when
you click the "Animate" button.)

I'm certainly not a web performance expert by any means, but this seems to
support Jasper_'s assertion that you can make tens of thousands of DOM updates
in 16 ms and that frameworks add substantial overhead to this.

(And FWIW, my modified non-React version of the demo maintains 60fps with
2,000 particles in Safari on my MacBook Pro from 2015. Admittedly, that's
without many other programs/tabs running.)

~~~
exogen
> a large portion of the frame time in that demo is spent in browser code,
> recalculating styles and redrawing

This was actually my point, not that React adds no overhead (which of course
it does).

Updating a property on a bunch of DOM nodes, without actually waiting for
their effects to be applied by the browser, is of course very fast. But the
browser code (reflow, paint, etc.) _is part of the frame budget_ , and the
number of nodes you can change – _accounting for those changes actually
appearing on screen_ – within that budget is embarrassingly small regardless
of whether you are doing raw DOM operations vs. using a framework like React.

It doesn't really count to say "ah yes, but the code that technically made the
update was fast!" when the updates haven't actually been committed to the
screen yet.

------
c-smile
Not sure I understand the achievement.

That type of rendering is more about GPU load than about anything React (DOM)
related.

As for DOM rendering of a comparable number of nodes, see this:

[https://terrainformatica.com/2019/07/29/bloomberg-terminal-how-i-would-it-with-sciter/](https://terrainformatica.com/2019/07/29/bloomberg-terminal-how-i-would-it-with-sciter/)

900 elements re-rendered on each frame of a kinematic scroll (60 FPS) with 10%
CPU load, and 250 FPS max on a typical high-DPI monitor.

In other tests the underlying recordset is updated at 25 FPS. The view
observes it and updates the screen for the visible rows - 2% CPU for that.

All of that runs in the main GUI thread, so I do not understand the excitement.

~~~
Normal_gaussian
Essentially the reactive state model - which imo is 'fast' to code in -
currently has a horrendous limitation on how many components can be reactive.

A naive minesweeper implementation (that is state-connected) will get upset at
a hundred cells, and terribly laggy at 900 (a standard 30x30 super expert
board).
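
The kind of naive "state-connected" board I mean, sketched out (a simplified
illustration, not any particular library; `makeBoard` and `revealCell` are
hypothetical helpers): every cell subscribes to the whole board state, so
revealing one cell re-renders all 900 of them.

    import React, { createContext, useContext, useState } from 'react';

    const BoardContext = createContext(null);

    function Cell({ x, y }) {
      // whole-board subscription: any change re-renders every cell
      const { board, reveal } = useContext(BoardContext);
      const cell = board[y][x];
      return (
        <button onClick={() => reveal(x, y)}>
          {cell.revealed ? cell.adjacentMines : ''}
        </button>
      );
    }

    function Board({ width = 30, height = 30 }) {
      const [board, setBoard] = useState(() => makeBoard(width, height));   // hypothetical helper
      const reveal = (x, y) => setBoard(prev => revealCell(prev, x, y));    // hypothetical helper
      return (
        <BoardContext.Provider value={{ board, reveal }}>
          {board.map((row, y) =>
            row.map((_, x) => <Cell key={`${x}-${y}`} x={x} y={y} />))}
        </BoardContext.Provider>
      );
    }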

This means devs have to abandon reactive programming for the parts of their
code with a non-trivial component count.

The demo is showing 'smooth' perf on these kinds of "state-connected"
workloads.

The graphics side of things is a partial distraction: mainly it means web
devs won't have to make the current performance vs. ease-of-development
tradeoff in webapps (think boring SaaS stuff).

The way in which it isn't a distraction is that it will make it possible to
get past CSS limitations inside of React in a very intuitive way.

As this raises the low bar React had for game perf, we will no doubt see more
React games.

But have no doubt - this is about what the current state of reactive
programming is capable of off the shelf.

~~~
gdxhyrd
I am amazed that even the slow path gets laggy with a mere 900 elements...

What are they doing?!

------
forrestthewoods
> If you give it an impossible load, so many render requests that it must
> choke, it will start to manage these requests to maintain a stable 60fps, by
> updating components virtually and letting them retain their visual state

Does that mean if you try to update too many things it will simply... not
update them in order to maintain 60fps? That does not seem ideal.

In this particular example, where a bunch of random objects are updated every
frame, will that result in a "spiral of death"? Meaning: every frame, M
transform requests are made, but it will only process N (where N < M) of them.
Or does it drop parts of the queue if it didn't process a transform before
receiving a second update for the same transform?

Is updating 2000 boxes per second really considered an "impossible" amount to
update? That seems like a shockingly small number.

Edit: I don’t understand the downvotes. My question on understanding behavior
is perfectly reasonable. I don’t understand how this new technology works.
What is it doing under the hood?

~~~
onion2k
_Is updating 2000 boxes per second really considered an "impossible" amount to
update? That seems like a shockingly small number._

That's not what's happening in the demo. It's _effectively_ updating 2000
virtual DOM nodes at 60 frames a second by scheduling updates so that as many
things as possible are updated in each 16ms frame. It'll scale with the
device. If you have a beast of a computer it might update all 2000 every
frame. If you're on a $100 smartphone it'll schedule the updates across
several frames.

In the demo each node is a three.js box geometry - react-three-fiber uses
React's virtual DOM reconciler to update state on three.js objects. That
doesn't have to be the case though. The nodes could be HTML elements or SVG
elements or any other browser-renderable item. React doesn't care.

What the demo really shows is that concurrent mode React moves the bottleneck
out of the framework and back to the browser - how fast the UI can be updated
will be down to the browser instead of what the JS framework can do. That's a
really big deal. It'll make writing performant UIs a lot easier which is good
for everyone.
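
The scheduling idea, very roughly (a conceptual sketch of deadline-based time
slicing, not React's actual scheduler): do as many queued updates as fit in
the current frame's budget, then yield and pick up the rest on later frames.

    const FRAME_BUDGET_MS = 16;
    const pending = [];                     // queue of update thunks

    function flush() {
      const start = performance.now();
      // spend at most ~half the frame budget applying queued updates
      while (pending.length && performance.now() - start < FRAME_BUDGET_MS / 2) {
        pending.shift()();                  // apply one queued update
      }
      // anything left over spills into later frames instead of blowing the budget
      if (pending.length) requestAnimationFrame(flush);
    }

    function scheduleUpdate(fn) {
      if (pending.length === 0) requestAnimationFrame(flush);
      pending.push(fn);
    }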

~~~
setr
I believe the question is: if I generate 2000 updates a second, but it's
applying those 2000 updates over 4 seconds (applying 500 updates a second to
maintain 60fps), then

at second 1 I have 2000 updates remaining (+2000 new)

At second 2 I have 3500 updates remaining (+2000 new, -500 processed)

At second 3 I have 5000 updates remaining (+2000 new, -500 processed)

That is, my backlog will grow indefinitely unless it's dropping updates.

~~~
kilburn
You _cannot_ generate 2000 updates a second if your computer cannot handle
them.

Javascript runs a "main loop" that just executes "tasks". Tasks are chunks of
synchronous code, and are not preemptible.

Let's say you have a "generator" task that generates 2_000 "update" tasks. The
way this works is:

1. "generator" schedules 2_000 "update" tasks to run as soon as possible, and
schedules another "generator" task for 1s from now.

2. The browser starts running "update" tasks one after the other as fast as
it can.

3a. If the "update" tasks are all done 1s after the previous "generator" task,
then the browser will run "generator" again.

3b. If they are not done, the browser will continue running "update" tasks and
only invoke "generator" again _after_ it has finished with all the "update"
tasks, be it 1 or 100 seconds after the previous "generator" task.
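
Sketched out (illustrative only, not anyone's actual code):

    // One "generator" task queues 2,000 "update" tasks and re-schedules itself
    // for 1s later. The event loop runs queued tasks to completion, one at a
    // time, so the next "generator" run cannot start until every already-queued
    // "update" task has finished.
    function update(i) {
      // some synchronous, non-preemptible chunk of work
    }

    function generator() {
      for (let i = 0; i < 2000; i++) {
        setTimeout(() => update(i), 0);   // "as soon as possible" tasks
      }
      // fires at ~1s, but won't run until the pending "update" tasks are done
      setTimeout(generator, 1000);
    }

    generator();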

~~~
mlsarecmg
You can. This is what the demo does. The scheduler takes that amount and
schedules it, which is the point. Every game engine does that (for instance
frustum culling). Games face an impossible amount of data, and schedule it.
This is also true for native dev, where you have priority dispatchers and
threads. What React does is exciting because it schedules at the very root.

~~~
Jasper_
Games don't face an impossible amount of data. It might look like an
impossible amount of data, but they play all sorts of cheats. If you have a
crowd with 8k people, there might be 50 animations being computed and shared
among the rest of the skeletons.

All members of the staff -- environment artists, character artists, level
designers, animators, FX artists -- are very technical and have the power
and wisdom to use the framerate wisely.

I'm not even sure why you're bringing up frustum culling -- you're suggesting
that we have too many objects to run the cull math on, so we schedule it across
frames? But we don't; that results in visual popping. If culling is a
bottleneck, we usually solve it with broad-phase data structures like octrees,
or ask the artists to condense multiple separate models into one so we have
fewer objects to manage (another big cheat, artist labor).

------
undoware
No, the sim itself is not that impressive compared to Unity or similar. It is,
however, impressive relative to legacy React, which, as a development
modality, has an enormous amount going for it. React and React-likes are
winning in the marketplace because the development style they open up really
is that much better than many competitors'. Yes, it's average everyday use
cases driving that; no, that is nothing to be ashamed of.

There are many reasons you don't write a shopping cart in Unity, but now you
can get a taste of that performance nevertheless.

Props to the React team. (Yes that is a joke; I am passing props to the React
team ;D)

~~~
dharma1
This isn't actually by the React team - it's by one incredible individual who
does open source for free: Paul Henschel.

~~~
crubier
In this post Paul Henschel is actually saying how impressed he is with the
work of the React team.

Paul's work here (react-three-fiber) is actually a very thin layer on top of
React, which is impressive in itself. In three files he was able to bind all
of React to all of three.js, which says a lot about React and about Paul's
work.

But the point is: it's React's general ideas that are at play here.

~~~
dharma1
He's also using his own state manager, Zustand, which gives a significant
speed boost.

------
fyp
Rendering "thousands" of anything is usually not how you impress people. The
scheduler is still cool though.

~~~
Jasper_
Yeah, I'd say around the order of ~10k items is when an O(n^2) algorithm gets
noticeably slow. "Thousands" is a shockingly weak payoff for all the effort.

edit: and the real innovation here appears to be writing a multi-threaded
scheduler. You should not need to spin up multiple threads to handle 2,000
objects. Something is going seriously wrong perf-wise here.

~~~
c-smile
> O(n^2) algorithm gets noticeably slow.

What algorithm? If that's about diff/reconciliation of DOM/VDOM then it is
worse than that - O(n^3).

~~~
forrestthewoods
Can you point me to a good resource on what the n^3 process is? It’s not
intuitive to me why it’s so, ahem, complex. Thanks! :)

~~~
fyp
I think he's way off on the complexity. He might be confusing it with text
diffs, where you need to find the longest common subsequence for the shortest
possible diff, but that is still O(N^2). If you are just diffing a list with
unique ids (like React child nodes are, with their required key prop), it's
just a set intersection, which is linear.
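
Sketched out (an illustration of keyed list diffing, not React's actual
implementation; `shallowEqual` is a hypothetical props-comparison helper):
with unique keys, old vs. new children reduces to map lookups, so the whole
diff is linear in the number of children.

    function diffChildren(oldChildren, newChildren) {
      const oldByKey = new Map(oldChildren.map(c => [c.key, c]));
      const ops = [];
      for (const child of newChildren) {
        const prev = oldByKey.get(child.key);
        if (!prev) {
          ops.push({ type: 'insert', child });
        } else if (!shallowEqual(prev.props, child.props)) {
          ops.push({ type: 'update', child });
        }
        oldByKey.delete(child.key);            // whatever remains was removed
      }
      for (const removed of oldByKey.values()) ops.push({ type: 'remove', child: removed });
      return ops;
    }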

I can see it being a bit more complex if you need to track how nodes move
across different parents. But it seems like React doesn't handle this:
[https://github.com/facebook/react/issues/3965#](https://github.com/facebook/react/issues/3965#)

~~~
baddox
The O(n^3) is for computing the minimum set of operations needed to transform
one VDOM tree into another (the tree edit distance). React apparently uses
simple heuristics to generate a diff in linear time.

[https://reactjs.org/docs/reconciliation.html](https://reactjs.org/docs/reconciliation.html)

------
_august
[https://reactjs.org/docs/concurrent-mode-intro.html](https://reactjs.org/docs/concurrent-mode-intro.html)

------
armagon
What are "comps" (much less "state-connected comps")?

~~~
crooked-v
comps: components

state-connected: components with changeable internal state (as distinct from
stateless components that always render the same way based on their inputs)

------
duxup
The scheduling and ability to provide an app that behaves as users expect /
based on user research is probably more important than any numbers.

------
francasso
The day ignorant people will stop discovering new stupid ways to solve trivial
problems and then market them to their undiscerning fellows will be a great
day for humanity. Unfortunately I will not live to see that day.

Yes this is a sour comment, made by an elitist who can no longer muster the
energy to be compassionate and walk towards the light those people who seem to
have taken a vow not to experiment, evaluate, judge and, most importantly, use
their brain and previous experience.

All the offensive words in this post have been chosen with care for their
dictionary definition. Now go ahead, complain and downvote.

------
gcpwnd
I am quite offended that this guy complains about people claiming great
benchmark scores and then does the very same. Can't we make buzz about
achievements without attacking others?

------
millstone
How does the scheduler achieve preemption? How does it decide when to render
and when to allow the event loop to run?

~~~
pomber
See [https://pomb.us/build-your-own-react/#step-iii-concurrent-mode](https://pomb.us/build-your-own-react/#step-iii-concurrent-mode)

~~~
millstone
Thanks. This really drives home how insane the browser architecture is.

React would like to pause rendering to react to new events as they come in.
However, there is no way for React to be told when there is a new event, so it
has to poll for it. Unfortunately there's no way to directly poll for a new
event, so it has to fake an event poll via inversion of control built on top
of requestAnimationFrame.
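
Roughly the shape of the work loop from the linked article (simplified; the
article uses requestIdleCallback as a stand-in, while React ships its own
scheduler): do one small unit of work, check whether the frame's idle time is
used up, and if so yield back to the browser so it can handle input and paint.

    let nextUnitOfWork = null;

    function workLoop(deadline) {
      while (nextUnitOfWork && deadline.timeRemaining() > 1) {
        nextUnitOfWork = performUnitOfWork(nextUnitOfWork);  // one small chunk of rendering
      }
      requestIdleCallback(workLoop);                         // poll again on the next idle period
    }

    requestIdleCallback(workLoop);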

~~~
pomber
Yes, but the React team is working together with the Chrome team to improve
it. There is an `isInputPending` proposal
[https://techcrunch.com/2019/04/22/facebook-makes-its-first-browser-api-contribution/](https://techcrunch.com/2019/04/22/facebook-makes-its-first-browser-api-contribution/)

------
swyx
we have an ongoing discussion at /r/reactjs if anyone is interested as well:
[https://www.reddit.com/r/reactjs/comments/e43l6w/rich_harris...](https://www.reddit.com/r/reactjs/comments/e43l6w/rich_harris_implements_the_round_react_demo_with/)

------
fenwick67
This is rendering 3D shapes in three.js, not doing anything in the DOM, so
this is quite a nothingburger, isn't it?

~~~
Waterluvian
The thing that excites me is using the React component paradigm for non-DOM
stuff like canvas and WebGL.

~~~
ioquatix
Unfortunately, history shows us that it's a terrible idea from a performance
perspective.

~~~
Jasper_
It's also a very strange way to write a renderer, since you want your renderer
to be pass-based rather than object-based. React + Three.JS is just the wrong
level of abstraction, since Three.JS is a scene graph toolkit.

~~~
mlsarecmg
React has hooks. They are essentially algebraic effects. Check out some of the
demos here: [https://github.com/react-spring/react-three-fiber](https://github.com/react-spring/react-three-fiber) and look for "useFrame".

useFrame binds a single component to the render loop, but in a managed way:
once the component unmounts, it gets taken out. You can also stack calls like
that with something like a z-index, which is awesome for effects.
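
A minimal example in the spirit of the ones in that repo (simplified; check
the react-three-fiber README for the exact API and hook signature): the
callback runs every frame while the component is mounted and stops when it
unmounts.

    import React, { useRef } from 'react';
    import { Canvas, useFrame } from 'react-three-fiber';

    function SpinningBox(props) {
      const mesh = useRef();
      // runs every frame while this component is mounted; removed on unmount
      useFrame(() => {
        mesh.current.rotation.x += 0.01;
        mesh.current.rotation.y += 0.01;
      });
      return (
        <mesh ref={mesh} {...props}>
          <boxBufferGeometry attach="geometry" args={[1, 1, 1]} />
          <meshStandardMaterial attach="material" color="orange" />
        </mesh>
      );
    }

    // usage: <Canvas><SpinningBox position={[0, 0, 0]} /></Canvas>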

This game uses some of it: [https://codesandbox.io/embed/react-three-fiber-untitled-game-i2160](https://codesandbox.io/embed/react-three-fiber-untitled-game-i2160)

------
proc0
I'm excited to see how much this will make 2D games possible with React.

~~~
c-smile
What do you expect from React in 2D games?

React is just a procedural method of UI rendering. That's what pretty much all
games do already, hence the question.
~~~
mlsarecmg
Check out this: [https://codesandbox.io/embed/react-three-fiber-untitled-game-i2160](https://codesandbox.io/embed/react-three-fiber-untitled-game-i2160)

Or some of the other demos here: [https://github.com/react-spring/react-three-fiber](https://github.com/react-spring/react-three-fiber)

React lends itself extremely well to games, especially with hooks. Now you
have reactive components, but also algebraic effects.

~~~
proc0
I was thinking of 2D games but whoa, very impressive.

