React in concurrent mode: 2000 state-connected comps re-rendered at 60FPS (twitter.com)
193 points by macando 8 days ago | 107 comments





From a marketing standpoint, I'm not sure demos like this that draw comparisons between JavaScript frameworks and 3D game engines are a great idea. The author describes updating 2000 cubes every frame as an "impossible amount of load", and claims that React will soon "run circles around even the best performing manual WebGL apps".

Here's Unity maintaining a smooth framerate while updating three times that many cubes: https://www.youtube.com/watch?v=qVMfKJfsHQg

Unlike the React demo, all of the cubes actually move each frame (it's not "rescheduling" updates to later frames like React Concurrent Mode does), and it's doing a complex physics simulation to decide where the cubes should go.


Yeah, maintaining state for 2000 elements is not a hard problem. I've written CPU particle systems which handle ~10k particles per frame, some of which even run entirely in the browser.

This shot from Super Mario Galaxy simulates around 3,000 particles, on top of all the other bone/joint animations that are happening. Performance like this is possible in a web browser, but you wouldn't think so given how popular React and ThreeJS are.

https://noclip.website/#smg/AstroGalaxy;AAI4t49Qk^u9Ld&YUm,m...


> Performance like this is possible in a web browser, but you wouldn't think so given how popular React and ThreeJS are.

Updating the state of thousands of DOM nodes at a steady 16ms per frame in a browser is hard enough that you have to go beyond "just update them all every frame" like in a simple particle system. To go fast you have to move to working out what needs to update based on what changed in the state.

Most web app developers don't want to have to think too hard about how it gets on the screen. They, very reasonably in my opinion, want to work on the app logic itself instead. Unfortunately that's how we end up with janky UIs. If React's concurrent mode can just be fast by default, by spreading out updates when the framerate is falling, that's good for everyone.


> Updating the state of thousands of DOM nodes at a steady 16ms per frame in a browser is hard enough

No, it's really not. I've done it before, and profiled it. 16ms is a super long time to a computer. You should easily be able to update tens, if not hundreds of thousands of DOM nodes, assuming you aren't doing anything weird like blocking on layout in one of them.

Let's aim for frameworks that are in the ballpark of peak raw performance, before we look to parallelism and multi-threading as a solution.

That someone possibly had the mistaken belief that this was peak performance for a hand-rolled WebGL codebase shows how low and out-of-touch our expectations for performance really are.


> had the mistaken belief that this was peak performance for a hand-rolled WebGL codebase

I totally agree on this.

> assuming you aren't doing anything weird like blocking on layout in one of them.

An assumption that React cannot make.

React's core promise is to make app developers' lives easy, not to be performant. Performance is atrocious on the web in general, yet the web is still (one of?) the most dominant app development platforms.

With that easiness what you get is plenty of horrible (from a performance perspective) applications that get their jobs done. Some even very elegantly on many other metrics.

> Let's aim for frameworks that are in the ballpark of peak raw performance

That is a nice aim, but performance can be traded for other things. Immutable data structures will never be as performant as mutable ones, yet plenty of people make the conscious decision of using the former.

I see React's approach similarly. I've built plenty of web applications in my life, and very rarely did (non-network) performance become a significant issue. As you said, computers are so powerful nowadays that most people (even devs!) cannot even comprehend it at an intuitive level.

It is always the app's logic, changing business requirements, etc. that bite me in the ass. I want tools that make that part easier. React is definitely one of those tools today. And if they found a way to keep the easiness while being better performance-wise then... yay!


> As you said, computers are so powerful nowadays that most people (even devs!) cannot even comprehend it at an intuitive level.

Yeah, and still a good number of Electron apps manage to put my CPU at 100% for something as advanced as "text editing".

CPUs are powerful when used right.


Text editing wasn't the best example here, because it's very technically involved and much more difficult to optimize than you seem to imply. (In fact, VS Code is much faster than any of the popular Java-based IDEs.) But replace "text editing" with "instant messaging" and your point still stands.

> That someone possibly had the mistaken belief that this was peak performance for a hand-rolled WebGL codebase...

No one thought that though. This is a demo of good performance using a web framework on top of a WebGL framework. It's showing that a future version of React will put building a solid 60fps web app UI within the reach of most web developers. Sure, you can hand-roll code to get that performance today if you know how, but this is about putting that performance in the hands of developers who can't (or, more often, aren't given the resources to). To argue that this is unnecessary or actually bad is ridiculous. Libraries that make it easier to build better apps are universally good things.


The OP literally tweeted this:

> I'm pretty sure it will run circles around even the best performing manual three or webGL apps out there soon.

https://twitter.com/0xca0a/status/1199997563396612096


Correct; however, the tweet author's misplaced arrogance does not invalidate the points made by your parent comment. To steel-man that argument, having this speed be default behavior allows developers to create much better UX than currently, at least in terms of responsiveness.

Speed should indeed be the default behavior, which is why I'm so upset at React being so slow. It really shouldn't be; computers are fast. We shouldn't have to multi-thread updating 2,000 objects to get performance.

> what you see above [the 2000-cube demo] simply would not be possible without scheduling, that is a fact. you can't expect any renderer to handle that amount of stress, it's a matter of capacity.

https://twitter.com/0xca0a/status/1200121269339000838


> You should easily be able to update tens, if not hundreds of thousands of DOM nodes, assuming you aren't doing anything weird like blocking on layout in one of them

Last I tried that was back in 2013. Haven't tried that particular benchmark since, but have tried others, and I don't believe things have changed much to move that particular needle.

The benchmark was updating the translate3d(random, random, 1px) of multiple 20x20 divs with a black background. It would increase the number of divs until we'd start to have dropped frames.

Obviously no re-layout, no repaint, practically no time spent on the script, and no GC. Compositor only.

Chrome on Windows on a beefed up PC maxed out at around 400 squares (divs).
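
Roughly, the benchmark looked like this (a reconstruction from the description above, so the details are approximate and the counts are illustrative):

    // N absolutely-positioned 20x20 black divs, each given a new translate3d()
    // every frame, so ideally only the compositor has work to do per frame.
    const N = 400; // roughly where Chrome started dropping frames on that machine
    const divs = [];
    for (let i = 0; i < N; i++) {
      const el = document.createElement('div');
      el.style.cssText =
        'position:absolute;top:0;left:0;width:20px;height:20px;background:black;';
      document.body.appendChild(el);
      divs.push(el);
    }

    function frame() {
      for (const el of divs) {
        const x = Math.random() * (innerWidth - 20);
        const y = Math.random() * (innerHeight - 20);
        el.style.transform = `translate3d(${x}px, ${y}px, 1px)`;
      }
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);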

So no, hundreds of thousands of divs would not be possible. Not on Blink's architecture at least. WebRender is trying to change that, but it's still too early to tell.

I tried the same benchmark with hand-written WebGL, and made sure that the black squares were drawn one-by-one to simulate how Chrome did it (no batched draw calls).

That maxed out at around 1k squares. Obviously all that time was being wasted back-and-forthing between the script and the GPU.

Then tried batching the draw calls (square coords on a large typed array), and that got me to, iirc, millions.

> That someone possibly had the mistaken belief that this was peak performance for a hand-rolled WebGL codebase shows how low and out-of-touch our expectations for performance really are.

True. It is mathematically impossible for React to beat hand-optimized WebGL. I'd guess it won't come close to 1/100 of the raw throughput. It would require a different programming model. And then it would no longer be React.


I really wish more people felt the way you do. You clearly have done the work to prove this to yourself. No one saying otherwise has done that work.

Two things produce conviction in people regarding technical matters: ignorance and experience. You clearly have the experience.

Tech needs more people like you.


That might be true, if this person were not most likely just making things up. Jasper_ is vastly overstating things if not outright lying. Updating that many elements is possible with canvas maybe, but DOM nodes, no.

Here's a dead simple demo in pure vanilla JavaScript: https://brianbeck.com/particles.html

Go ahead and see what number you get to before it drops below 60fps. Personally I have to decrease it to 1000 or so on my 2018 MacBook Pro.

And that's updating a style property that does not cause reflow calculation.

Anyone including Jasper_ is welcome to post a similar demo of their 100x+ performance improvements on this, but I doubt they will.


From my preliminary profile, it's spending an abnormal amount of time recalculating the style, which is bizarre as nothing major has changed. Admittedly, my own experiences have been for large data tables where you tend to reuse cell nodes as data gets repurposed, so styles aren't updated as often. I also used scoped styles to ensure that nodes wouldn't search the whole tree to find their styles (something that has since been removed from Chrome). Supposedly this is replaced now by Display Locking [0], but I wasn't able to get it to work for this example in my limited testing. Going back to my old project, it also seems to have regressed in performance. Grr.

I will admit that DOM is some serious black magic, the browser is an unfortunate moving target, and when I say "should be able to update 10k", I am laying as much blame at the feet of the browser developers as I am at the frameworks.

I would go back and edit my old post now, since you're clearly right in this case, but it seems the time limit on it has run out.

[0] https://github.com/WICG/display-locking/blob/master/README.m...


It's important to note that a large portion of the frame time in that demo is spent in browser code, recalculating styles and redrawing. I modified the demo to display the amount of time spent in the actual function that updates the DOM nodes, and I can crank it to 50,000 before that number exceeds 16 milliseconds: https://fiddle.jshell.net/btzqw47u/show/
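
The idea is simply to time the loop that writes to the DOM and nothing else, something like this (a sketch of the approach, not the exact fiddle; `nodes` and `particles` are assumed to be set up as in the linked demo):

    // Time only the JS that mutates the DOM; the browser's style/layout/paint
    // work happens after the script yields and is deliberately excluded here.
    function updateParticles(nodes, particles) {
      const start = performance.now();
      for (let i = 0; i < nodes.length; i++) {
        const p = particles[i];
        p.x += p.vx;
        p.y += p.vy;
        nodes[i].style.left = p.x + 'px'; // writing left/top; elsewhere in the
        nodes[i].style.top = p.y + 'px';  // thread transform turned out costlier
      }
      return performance.now() - start;   // script-only time, not full frame time
    }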

For comparison, my quick-and-dirty React version of the demo spends around 100 ms in React/updating DOM elements when you have 50,000 particles: https://jsfiddle.net/t6js8k5e/ (If you try that one at home, be warned that the browser might hang for a few seconds when you click the "Animate" button.)

I'm certainly not a web performance expert by any means, but this seems to support Jasper_'s assertion that you can make tens of thousands of DOM updates in 16 ms and that frameworks add substantial overhead to this.

(And FWIW, my modified non-React version of the demo maintains 60fps with 2,000 particles in Safari on my MacBook Pro from 2015. Admittedly, that's without many other programs/tabs running.)


> a large portion of the frame time in that demo is spent in browser code, recalculating styles and redrawing

This was my point actually, not that React adds no overhead, which of course it does.

Updating a property on a bunch of DOM nodes, without actually waiting for their effects to be applied by the browser, is of course very fast. But the browser code (reflow, paint, etc.) is part of the frame budget, and the number of nodes you can change – accounting for those changes actually appearing on screen – within that budget is embarrassingly small regardless of whether you are doing raw DOM operations vs. using a framework like React.

It doesn't really count to say "ah yes, but the code that technically made the update was fast!" when the updates haven't actually been committed to the screen yet.


Ah, indeed, it seems like modifying transform is a lot more expensive than modifying left, for some reason. I didn't think to check that and assumed they were roughly equivalent. I'm guessing browsers do extra work on setting transform to recalculate style. What a bizarre result. Nice work!

> Anyone including Jasper_ is welcome to post a similar demo of their 100x+ performance improvements on this, but I doubt they will.

Are you kidding? Look at any modern 3D video game on any platform, INCLUDING WebGL.


I think you missed what the conversation was about.

> You should easily be able to update tens, if not hundreds of thousands of DOM nodes

We're talking about updating multiple DOM nodes dude, not a 3D rendering engine or a single canvas element. People don't make React websites where everything is drawn in canvas.


Not yet, just wait for Flash Fenix to become widespread.

Yup, and I got a few downvotes for my trouble.

This is a great example of why you shouldn’t put more stock in someone’s argument simply because they claim to have evidence corroborating their argument. What’s important is to see the evidence.

(See Jasper’s later comment that the described scenario was in fact a very specific example that probably doesn’t extend to many very common simple DOM manipulations and might have relied on a browser-specific optimization that no longer exists.)


Unfortunately tech also lacks people with vision, those who have the gut feeling to believe something is possible without having to be shown every time how to get it done.

> https://noclip.website

Very cool website! I just noticed you're the one behind it[1]! Just curious: when people contribute textures to your project, do they have to get permission from the video game publisher first?

Also, how hard is the process of extracting the textures of a video game? Is there effort involved in stitching multiple DDS/other texture files together, so that they can be presented on your website?

[1] https://github.com/magcius/noclip.website


The browser reads as close to raw game data as it can get (I have written my own data extractors in some cases if the data is too big, or doesn't have a natural filesystem so that people can't download raw ROMs).

In the case of the Galaxy levels, it's reading the raw files as shipped on the Wii disc. I have my own renderer and emulation tech [0]. It's the only way to ensure accuracy to a degree I'm happy with. For textures specifically, they're stored in the Wii's texture formats, so I have my own texture decoder [1].

I did not get permission from the original game developers, but I think I'm doing everything here in good faith, out of love for the source material, and that I'm not replacing the experience of playing the game for yourself. The developers of Fez specifically retweeted a post about it on the official Twitter.

[0] https://github.com/magcius/noclip.website/blob/master/src/gx... [1] https://github.com/magcius/noclip.website/blob/master/src/as...


I'm in a browser window slightly below 1920x1080, with an AMD RX Vega 10 mobile GPU. The page renders at 20-30fps in Chrome, and 10-20fps in Firefox.

If you resize the browser window smaller, does it get faster? If so, sounds like you're GPU limited, which still proves that the bottleneck isn't the CPU, which is what we're talking about here -- CPU performance of React vs. non-React.

Thank you, finally a sane person who understands how blazing fast computers are.

Here's a WebGL+WASM experiment I did a little while ago, all object positions are CPU updated. It goes to a million objects at 60Hz without breaking a sweat on the desktop machine I'm testing here (1 million is where the demo stops adding objects), all very simple and straightforward single-threaded code.

There are some stuttering frames every 300k objects or so, I guess this is when WebGL needs to grow buffers under the hood. Hoping for WebGPU to fix those problems.

https://floooh.github.io/oryol/wasm/Instancing.html

No matter how you look at it, moving a few thousand things around on screen really isn't all that impressive, you just have to know the weaknesses of the platform (WebGL's is a high draw call overhead) and work around them.


Amazing. My MBP/Firefox still had 50% idle with a frame request every 8.6ms at 1M particles. React is a bit of a resource hog, and all the article shows is that modern processors are really powerful.

It was around 16ms on an iPad mini 5. Computers are fast.

This is the only answer I've found in this thread that actually demonstrates the point everyone is making, it's buttery smooth on my iPhone SE (!) up to 300k elements.

Indeed, but we're not even talking about WebGL's high draw call overhead (that overhead is constant between all the demos!), just the overhead of updating ThreeJS's view of the world on the main thread, which should be a matrix mul at the most. The rest of the overhead from ThreeJS rendering, and from WebGL, that's the same in both demos!

Matrix multiplication should not have to be scheduled!


React concurrent makes a lot of sense for UI elements, which are complex and self contained and sporadically updated.

Whereas a game engine is much more about raw speed, and every element is updated on each tick.

I would be surprised if the overhead of React and concurrent mode bookkeeping were worth it for a game engine.


I very much respect and appreciate this perspective, however I think it's important to consider the context of what React is used for. It's made for things like building store fronts and UIs for SaaS companies. Stuff that's lists of things, with some form input. It isn't really used for games. Not to say that React couldn't be used for that, but most people aren't using React while also placing extreme pressure on the GPU.

Also, I don't know if people want an e-shopping experience that has the visual complexity of feeling like they're playing a video game.


It's also used quite extensively for data visualisation roles that D3 has traditionally been king of, in fact often alongside D3's libraries for calculating axis tick marks, colour scales, etc. To that end, you actually do end up with thousands of DOM nodes quite often.

React came out the same year they bought Oculus, so there might be a long term plan to first conquer the world of webdev, get the devs used to that paradigm and then integrate it with VR and funnel the devs to their own platform in the long run.

This could explain the unnecessary complexity of the vdom for normal websites and apps. In VR you need high steady framerates; the vdom could be used to shorten the round trip from client to server, as some kind of framebuffer.


what? both the DOM and the VDOM are client-side data structures, what are you even talking about

Don't they represent app state?

> It isn't really used for games

React VR is a thing, though.


It's a challenge with any demo, because examples in software are always by definition contrived, or too specific to be relatable. There's always going to be some user base that can relate your example better to their challenges than others.

Does it even matter? React is becoming the defacto standard for the web it seems.

Not sure I understand the achievement.

That type of rendering is more about GPU load than of anything React (DOM) related.

As for DOM rendering of a comparable number of nodes, see this:

https://terrainformatica.com/2019/07/29/bloomberg-terminal-h...

900 elements re-rendered on each frame of kinematic scroll (60 FPS) with 10% CPU load. And 250 FPS max on typical high-DPI monitor.

In other tests the underlying recordset is updated at 25 FPS. The view observes it and updates the screen for visible rows - 2% CPU for that.

All that in the main GUI thread, so I do not understand the excitement.


Essentially the reactive state model - which imo is 'fast' to code in - currently has a horrendous limitation on how many components can be reactive.

A naive minesweeper implementation (that is state-connected) will get upset at a hundred cells, and terribly laggy at 900 (a standard 30x30 super expert board).

This means devs have to abandon reactive programming for parts of their code with a non-trivial component count.

The demo is showing 'smooth' perf on these kinds of "state-connected" workloads.

The graphics side of things is a partial distraction, mainly it will mean web devs won't have to make the current performance vs ease of development tradeoff in webapps (think boring saas stuff).

The way in which it isn't a distraction is it will make it possible to get past CSS limitations inside of react in a very intuitive way.

As this raises the low bar react had for game perf, we will no doubt see more react games.

But have no doubt - this is about what the current state of reactive programming is capable of off the shelf.


I am amazed at the fact that even going the slow path gets laggy with a mere 900 elements...

What are they doing?!


Ugh, yeah. So this benchmark is comparing apples to oranges.

> If you give it an impossible load, so many render requests that it must choke, it will start to manage these requests to maintain a stable 60fps, by updating components virtually and letting them retain their visual state

Does that mean if you try to update too many things it will simply... not update them in order to maintain 60fps? That does not seem ideal.

In this particular example, a bunch of random objects updated every frame, will that result in a "spiral of death"? Meaning every frame M transform requests are made. But it will only process N (where N < M) requests. Or does it drop parts of the queue if it didn't process a transform before receiving a second update for the same transform?

Is updating 2000 boxes per second really considered an "impossible" amount to update? That seems like a shockingly small number.

Edit: I don’t understand the downvotes. My question on understanding behavior is perfectly reasonable. I don’t understand how this new technology works. What is it doing under the hood?


> Is updating 2000 boxes per second really considered an "impossible" amount to update? That seems like a shockingly small number.

That's not what's happening in the demo. It's effectively updating 2000 virtual DOM nodes at 60 frames a second by scheduling updates so as many things are updated in each 16ms frame as possible. It'll scale with the device. If you have a beast of a computer it might update all 2000 every frame. If you're on a $100 smart phone it'll schedule the updates across several frames.

In the demo each node is a three.js box geometry - react-three-fiber uses react's virtual DOM reconciler to update state on three.js things. That doesn't have to be the case though. The nodes could be HTML elements or SVG things or any other browser renderable item. React doesn't care.

What the demo really shows is that concurrent mode React moves the bottleneck out of the framework and back to the browser - how fast the UI can be updated will be down to the browser instead of what the JS framework can do. That's a really big deal. It'll make writing performant UIs a lot easier which is good for everyone.


I believe the question is that if I generate 2000 updates a second, but it's applying those 2000 updates over 4 seconds (applying 500 updates a second to maintain 60fps), then

At second 1 I have 2000 updates remaining (+2000 new)

At second 2 I have 3500 updates remaining (+2000 new, -500 processed)

At second 3 I have 5000 updates remaining (+2000 new, -500 processed)

That is, my backlog will indefinitely grow larger, unless it's dropping updates.


React's concurrent mode isn't a magic fix for apps that literally want to do more than the computer can cope with. It's a way of distributing changes across frames so the browser isn't doing a huge update in one frame and then idling in the next 5. That's very useful for 99.9% of web UIs right now. If your app is in that 0.1% then you'll still have some perf work to do yourself.

You cannot generate 2000 updates a second if your computer cannot handle them.

JavaScript runs a "main loop" that just executes "tasks". Tasks are chunks of synchronous code, and are not preemptible.

Let's say you have a "generator" task that generates 2_000 "update" tasks (a rough sketch follows the list). The way this works is:

1. "generator" schedules 2_000 "update" tasks to run as soon as possible, and schedules another "generator" task for 1s from now.

2. The browser starts running "update" tasks one after the other as fast as it can.

3a. If the "update" tasks are all done 1s after the previous "generator" task, then the browser will run "generator" again.

3b. If they are not done, the browser will continue running "update" tasks and only invoke "generator" again _after_ it has finished with all the "update" tasks, be it 1 or 100 seconds after the previous "generator" task.
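
A rough sketch of that pattern, with illustrative numbers:

    // "generator" queues a batch of "update" tasks and re-schedules itself for
    // ~1s later. Each task is synchronous and non-preemptible; the event loop
    // itself is the rate limiter, so the backlog cannot grow without bound.
    function update() {
      // some small synchronous chunk of work
    }

    function generator() {
      for (let i = 0; i < 2000; i++) {
        setTimeout(update, 0);      // schedule each update as its own task
      }
      setTimeout(generator, 1000);  // due in 1s, but runs only after the queued
    }                               // updates drain, even if that takes longer

    generator();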


You can. This is what the demo does. The scheduler takes that amount and schedules it, which is the point. Every game engine does that (for instance frustum culling). Games face an impossible amount of data, and schedule it. This is also true for native dev where you have priority dispatchers and threads. What React does is exciting because it schedules at the very root.

Games don't face an impossible amount of data. It might look like an impossible amount of data, but they play all sorts of cheats. If you have a crowd with 8k people, there might be 50 animations being computed and shared between the rest of the skeletons.

All members of the staff -- environment artists, character artists, level designers, animators, FX artists -- are all very technical and have the power and wisdom to use the framerate wisely.

I'm not even sure why you're bringing up frustum culling -- you're suggesting that we have too many objects to run the cull math on, so we schedule across frames? But we don't; that results in visual popping. If culling is a bottleneck, we usually solve it with broad-phase data structures like octrees, or ask the artists to condense multiple separate models into one so we have fewer objects to manage (another big cheat, artist labor).


Frustum culling is completely unrelated to scheduling. It’s just a technique for avoiding spending processing or rendering time on things that can’t be seen by the camera. It existed before games were multithreaded, during the time when they typically had a single main loop.

Games are not using complex scheduling and multithreading to achieve acceptable performance updating 2000 entities. You can achieve that easily on a single thread by laying your data out to take advantage of locality of reference and the CPU cache.
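
For scale, the kind of single-threaded, cache-friendly loop being described looks something like this (a generic sketch, not from any particular engine):

    // Structure-of-arrays layout: positions and velocities live in flat typed
    // arrays and are updated in one tight, sequential loop. At 2000 entities
    // this is a trivial amount of work for one thread.
    const COUNT = 2000;
    const px = new Float32Array(COUNT), py = new Float32Array(COUNT);
    const vx = new Float32Array(COUNT), vy = new Float32Array(COUNT);

    function tick(dt) {
      for (let i = 0; i < COUNT; i++) {
        px[i] += vx[i] * dt;  // contiguous memory access, friendly to the CPU cache
        py[i] += vy[i] * dt;
      }
    }

    tick(1 / 60);             // one 60Hz step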


I put cannot in italics because yes, you can schedule over the processing capacity limits. However, the CPU acts as a rate limiter in and of itself. If your CPU cannot process 2_000 updates per second, you will not be doing more than that, period. You can schedule 4_000 updates, but they will take 2s to process, no way around it. If you keep scheduling over the app's processing capabilities, you will just run out of memory to hold the queue. Nobody is interested in doing that.

What games do is fundamentally different from what React is doing here. In games there are many places where you can trade off quality for speed (for instance frustum culling). You can decide to compute less to fit within your computation time slice. Likewise, physics can take into account the update rate to adjust to the CPU/GPU's processing capability (i.e. you can use a delta_t in your computations to adjust for frame differences).
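
For what it's worth, the delta_t idea in a browser loop is just this (a minimal sketch):

    // Advance the simulation by the real elapsed time, so the world moves at the
    // same speed whether the browser delivers 30 or 60 frames per second.
    const objects = [{ x: 0, speed: 100 }]; // speed in units per second
    let last = performance.now();

    function frame(now) {
      const dt = (now - last) / 1000;       // seconds since the previous frame
      last = now;
      for (const obj of objects) {
        obj.x += obj.speed * dt;            // per-second, not per-frame
      }
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);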

In the website case there are some such opportunities (you could do it with animations for instance), but you cannot do it in general because most apps' logic would not tolerate it (yeah, sorry, I dropped your request callback, bad luck). For this reason I would be very surprised if React went that route.


No game engine does that and they don't face an "impossible" amount of data. A game designer may choose to spread work over several frames, but that is done on purpose by the developer.

And frustum culling has nothing to do with this, nor does it reduce the number of updates.


If you have 2000 nodes and you generate 2000 updates a second, you will never have more than 2000 things to update per frame, as you only have 2000 nodes. Thus even at second 2 or 3, you will still only have 2000 things to update, as the older updates are now outdated and have been replaced by something more recent. If you update only 500 of them per second, that means that on average it will take 4 seconds before you see the most up-to-date information for a node.
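
In other words the updates can be coalesced per node, along these lines (an illustrative sketch, not how React actually stores its work):

    // Newer updates for a node overwrite older ones, so the backlog is bounded
    // by the number of nodes rather than by the number of updates generated.
    const pending = new Map();              // nodeId -> latest update

    function applyUpdate(nodeId, update) {
      // placeholder for whatever actually mutates the node
    }

    function queueUpdate(nodeId, update) {
      pending.set(nodeId, update);          // replaces any stale update for this node
    }

    function flush(budget) {                // process at most `budget` nodes per frame
      let processed = 0;
      for (const [nodeId, update] of pending) {
        if (processed++ >= budget) break;
        applyUpdate(nodeId, update);
        pending.delete(nodeId);
      }
    }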

> If you have a beast of a computer it might update all 2000 every frame.

The point is that you shouldn't need a beast of a computer to update 2,000 boxes. That's way outside of the performance profile I expect from any recent CPU.


Yes, but it scales with the environment and device capabilities. Watch this, it explains it in detail: https://www.youtube.com/watch?v=nLF0n9SACd4

No, the sim itself is not that impressive compared to Unity or similar. They are, however, impressive relative to legacy React, which, as a development modality, has an enormous amount going for it. React and React-likes are winning in the marketplace because the development style it opens up really is that much better than many competitors. Yes, it's average everyday use cases driving that; no, that is nothing to be ashamed of.

There are many reasons you don't write a shopping cart in Unity, but now you can get a taste of performance nevertheless.

Props to the React team. (Yes that is a joke; I am passing props to the React team ;D)


this isn't actually by the React team - it's by one incredible individual, who does open source for free - Paul Henschel

In this post Paul Henschel is actually saying how impressed he is with the work of the react team.

Paul's work here (react-three-fiber) is actually a very thin layer on top of React. Which is impressive in itself. In 3 files he was able to bind all of React to all of three.js. Which says a lot about React and about Paul's work.

But the point is: it's React's general ideas that are at play here.


He's also using his own state manager, Zustand, which gives a significant speed boost.

Rendering "thousands" of anything is usually not how you impress people. The scheduler is still cool though.

Yeah, I'd say around the order of ~10k items is when an O(n^2) algorithm gets noticeably slow. "Thousands" is a shockingly weak payoff for all the effort.

edit: and the real innovation here appears to be writing a multi-threaded scheduler. You should not need to spin up multiple threads to handle 2,000 objects. Something is going seriously wrong perf-wise here.


I was pretty surprised at how my large Vue 2 form ground to a halt when I used it with 1-2 thousand objects. Mostly via many layers of deeply nested small JSON objects, which grow exponentially fast ... because customers like jamming as much data as possible into a single HTML page's form instead of breaking it up across multiple forms/parent objects like we intended.

Beyond work-arounds like pagination/hiding data, I've found the best solution is being careful with what data is marked as observable/reactive and tracked by Vue for changes...which in Vue 2 is the automatic default behaviour, where everything you put into data()/computed/Vuex or elsewhere is monitored. On top of that, functionality which updates tons of rows at once (for example drag-and-dropping a row, which then updates { order: n } for a hundred objects) resulted in mini-freezes/slow-downs.

Crawling through dev tools memory captures never helped much because Vue is already extremely async/concurrent, so there was never one big slow thing to refactor - the problem was many thousands of small things that turned into one big slow thing.

Vue 3 (surprisingly) helped me solve this problem, not by adding some extra concurrency to the view layer, but by making all the data default to plain JS in a simple module, instead of the big options interface in Vue 2 where everything is observable by default; instead you explicitly mark which data should be reactive and watched. I managed to reduce the number of observed objects by 50-75% in some cases, which got rid of any slow-downs.
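
In Vue 3 terms the pattern is roughly this (a simplified sketch; `loadRows` is a made-up placeholder):

    // Vue 3: only what you wrap in reactive()/ref() is tracked. Everything else
    // stays a plain, fast JS object that Vue never has to observe.
    import { reactive } from 'vue';

    function loadRows() {
      return []; // placeholder for loading thousands of plain, non-reactive rows
    }

    export const allRows = loadRows();  // never observed by Vue

    export const formState = reactive({
      selectedRowId: null,              // only the small slice bound to the form
      draft: {},                        // is made reactive
    });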

This of course was a rare use-case and not a knock on Vue; most people aren't managing hundreds of objects in a single HTML UI/form. But it also wasn't very extreme either - just a modern web-based, data-heavy B2B software requirement.

Not all of it is due to the inherent framework design either; it's also about how you use it + having good best-practices and defaults.

I'm sure there were ways to do the same in Vue 2, but having it be the default approach significantly helps organize the code in a performant way. Only the data which needs to be reactive will be reactive, and everything else is simple, fast JS objects.


Use Object.freeze in your mutation.
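
Something like this (a sketch; Vue 2 skips observing frozen objects):

    // Vue 2 will not attach reactive getters/setters to frozen objects, so a
    // large read-only dataset stays a plain object tree.
    import Vue from 'vue';
    import Vuex from 'vuex';

    Vue.use(Vuex);

    export default new Vuex.Store({
      state: { rows: [] },
      mutations: {
        setRows(state, rows) {
          state.rows = Object.freeze(rows); // frozen: not observed, much cheaper
        },
      },
    });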

Yeah, that was what I started doing initially before switching to Vue 3. It gets aesthetically ugly pretty fast when every time you declare a variable (or more importantly flag subsections of bigger objects) you have to put Object.freeze in front of it x100. And you have to make sure any data inserted indirectly into an object afterwards gets frozen as well.

I'd much rather the default be non-observed and the reactive data is explicitly marked, instead of the opposite. Which is a surprisingly under-reported evolution in Vue 3.


Freeze doesn’t play nicely with Vue, in my experience.

> O(n^2) algorithm gets noticeably slow.

What algorithm? If that's about diff/reconciliation of DOM/VDOM then it is worse than that - O(n^3).


Can you point me to a good resource on what the n^3 process is? It’s not intuitive to me why it’s so, ahem, complex. Thanks! :)

I think he's way off on the complexity. He might be confusing it with text diffs where you need to find the longest common subsequence for the shortest possible diff but that is still O(N^2). If you are just diffing a list with unique ids (like react child nodes are, with their required key prop), it's just a set intersection which is linear.

I can see it being a bit more complex if you need to track how nodes move across different parents. But it seems like react doesn't handle this: https://github.com/facebook/react/issues/3965#


The O(n^3) is to compute the minimum set of operations to transform one VDOM tree into another VDOM tree. React apparently uses simple heuristics to generate a diff in linear time.

https://reactjs.org/docs/reconciliation.html
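
The keyed-children case mentioned upthread, as a small sketch (not React's actual reconciler):

    // Diff two keyed child lists in linear time: one Map lookup per child
    // instead of comparing every old child against every new child.
    function diffChildren(oldChildren, newChildren) {
      const oldByKey = new Map(oldChildren.map(c => [c.key, c]));
      const ops = [];

      for (const child of newChildren) {
        const prev = oldByKey.get(child.key);
        if (prev) {
          ops.push({ type: 'update', key: child.key, prev, next: child });
          oldByKey.delete(child.key);
        } else {
          ops.push({ type: 'insert', node: child });
        }
      }
      for (const key of oldByKey.keys()) {
        ops.push({ type: 'remove', key });  // anything left over was deleted
      }
      return ops;
    }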


VDOM is O(n).

I suspect you never had to deal with the days of AngularJS, where just rendering a large table-style view with a thousand interactive elements could slow browsers to a crawl.

I think the comment is referring to the state of the front end in context of computing in general.

In a broader context those metrics are pathetic. Your OS for example has UI performance light years faster.



What are "comps"? (much less "state-connected comps")?

comps: components

state-connected: components with changeable internal state (as distinct from stateless components that always render the same way based on their inputs)


I assumed he was talking about components.

The scheduling and ability to provide an app that behaves as users expect / based on user research is probably more important than any numbers.

The day ignorant people will stop discovering new stupid ways to solve trivial problems and then market them to their undiscerning fellows will be a great day for humanity. Unfortunately I will not live to see that day.

Yes this is a sour comment, made by an elitist that can no longer muster the energy to be compassionate and walk towards the light people that seem to have taken a vow not to experiment, evaluate, judge and, most importantly, use their brain and previous experience.

All the offensive words in this post have been chosen with care for their dictionary definition. Now go ahead, complain and downvote.


I am quite offended that this guy complains about people who claim that they have great benchmark scores and then he does the very same. Can't we make buzz about achievements without attacking others?

How does the scheduler achieve preemption? How does it decide when to render and when to allow the event loop to run?


Thanks. This really drives home how insane the browser architecture is.

React would like to pause rendering to react to new events as they come in. However, there is no way for React to be told when there is a new event, so it has to poll for it. Unfortunately there's no way to directly poll for a new event, so it has to fake an event poll via inversion of control built on top of requestAnimationFrame.
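
The general shape of that (do a slice of work, yield, resume on the next frame) is something like the following; an illustrative sketch of cooperative scheduling, not React's actual scheduler:

    // Work through a queue of small units, but hand control back to the browser
    // whenever the current time slice is used up, so input handling and painting
    // can happen in between.
    const queue = [];
    const SLICE_MS = 5;

    function workLoop(deadline) {
      while (queue.length && performance.now() < deadline) {
        const unit = queue.shift();
        unit();                             // one small synchronous piece of work
      }
      if (queue.length) {                   // not finished: resume next frame
        requestAnimationFrame(() => workLoop(performance.now() + SLICE_MS));
      }
    }

    function schedule(unit) {
      queue.push(unit);
      if (queue.length === 1) {             // first item: kick off the loop
        requestAnimationFrame(() => workLoop(performance.now() + SLICE_MS));
      }
    }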


Yes, but the React team is working together with the Chrome team to improve it. There is an `isInputPending` proposal https://techcrunch.com/2019/04/22/facebook-makes-its-first-b...

I wrote something like this many years ago and all I did was queue stuff to be processed simultaneously.

we have an ongoing discussion at /r/reactjs if anyone is interested as well: https://www.reddit.com/r/reactjs/comments/e43l6w/rich_harris...

This is rendering 3d shapes in threejs, not doing anything in the DOM, so this is quite a nothingburger isn't it?

The thing that excites me is using the react component paradigm for non DOM stuff like canvas and webGL.

Unfortunately, history shows us that it's a terrible idea from a performance perspective.

It's also a very strange way to write a renderer, since you want your renderer to be pass-based, rather than object-based. React + Three.JS is just the wrong level of abstraction, since Three.JS is a scene graph toolkit.

React has hooks. They are essentially algebraic effects. Check out some of the demos here: https://github.com/react-spring/react-three-fiber and look for "useFrame".

useFrame binds a single component to the render loop, but in a managed way. Once the component unmounts, it gets taken out. You can also stack calls like that with something like a z-index, which is awesome for effects.
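
For example, something along these lines (a sketch in react-three-fiber's style; exact API details may differ between versions):

    import React, { useRef } from 'react';
    import { useFrame } from 'react-three-fiber';

    // A box that animates itself: useFrame subscribes this component to the
    // render loop, and the subscription is removed when the component unmounts.
    function SpinningBox(props) {
      const mesh = useRef();
      useFrame(() => {
        mesh.current.rotation.x += 0.01;
        mesh.current.rotation.y += 0.01;
      });
      return (
        <mesh ref={mesh} {...props}>
          <boxBufferGeometry attach="geometry" args={[1, 1, 1]} />
          <meshStandardMaterial attach="material" color="hotpink" />
        </mesh>
      );
    }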

This game uses some of it: https://codesandbox.io/embed/react-three-fiber-untitled-game...


Flutter looks like it could be quite interesting for this - on mobile it compiles over to native machine code and uses GL or whatever.

I haven't looked at the details for web, but I think it does webasm and canvas, etc. and with the architecture they have it could potentially be a lot faster than traditional JS frameworks https://flutter.dev/web


Why is this exciting?

You tell React what you want your scene graph to look like in the end. You don't have to write down how to update it. Which removes a whole class of bugs AND simplifies code A LOT.

I'm excited to see how much this will make 2d games possible with react.

I really, really hope nobody writes games in React.

It is such a bad fit for games and it is very disrespectful of the environment and your users' battery.


Remains to be seen, imho. Of course I'm talking about casual games with light 2d sprites/images.

What do you expect from React in 2D games?

React is just a procedural method of UI rendering. That's what pretty much all games do already. Hence the question.


React is not procedural. React is declarative. React is a way to turn imperative APIs into declarative APIs that are much easier to reason about. So yes, this is a big deal in my opinion.

Check out this: https://codesandbox.io/embed/react-three-fiber-untitled-game...

Or some of the other demos here: https://github.com/react-spring/react-three-fiber

React lends extremely well to games, especially with hooks. Now you have reactive components, but also algebraic effects.


I was thinking 2d games but woah, very impressive.

Many games are also a procedural method of UI rendering. Think about puzzle games, or even tetris. From a high level they're just complicated buttons. Of course it wouldn't be a good fit for anything that is highly interactive or 3d.



