Hacker News
React Performance (aerotwist.com)
145 points by teamhappy on July 3, 2015 | 89 comments

I think this is a contrived, or over-simplified, example.

The DOM manipulation is fast in this case because it's a simple appendChild every time. In other cases like where elements in the middle of a table are updated, you would get into a mess writing vanilla code, either complexity or performance wise, because you'd have to traverse the DOM to get to where you need to do updates and do each update individually. React batches such things together, and does one single update.

Show me a benchmark of an actual real app written in vanilla JS and React. I suspect the DOM manipulation time would be way higher.

It's not just simple appendChild calls. I actually worked on an app which updated a large table – displaying file metadata, checksums calculated in web workers, etc. for a delivery – and found React to be 40+ times slower than using the DOM[1] or even simply using innerHTML, getting worse as the number of records increased.

The main trap you're falling prey to is the magical thinking which is sadly prevalent about the virtual DOM and batching. Basic application of Amdahl's law tells us that the only way the React approach can be faster is if the overhead of the virtual DOM and framework code is balanced out by being able to do less work. That's true if you're comparing to, say, a primitive JavaScript framework which performs many unnecessary updates (e.g. re-rendering the entire table every time something changes) or if the React abstractions allow you to make game-changing optimizations which would be too hard for you to make in regular code.

Since you mentioned batching, here's a simple example: it's extremely hard to find a case where a single update will be faster because the combined time to execute a JS framework and make an update is always going to be greater than simply making the update directly. If, however, you're making multiple updates it's easy to hit pathologically bad performance due to layout thrashing[2] when the code performing an update reads something from the DOM which was invalidated by an earlier update, requiring the browser to repeatedly recalculate the layout.

That can be avoided in pure JavaScript by carefully structuring the application to avoid that write-read-write cycle or by using a minimalist library like Wilson Page's fastdom[3]. This is quite efficient but can be harder to manage in a large application, and that's where React can help by making that kind of structure easier to code. If you are looking for a benchmark where React will perform well, that's the area I'd focus on, and I'd do so by looking at both the total amount of code and the degree to which performance optimizations interfere with clean separation, testability, etc.
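A minimal sketch of what fastdom-style batching does (a hypothetical miniature, not fastdom's real API, and real fastdom flushes on requestAnimationFrame rather than manually): reads and writes are queued separately and flushed reads-first, so a write never invalidates layout right before a read.

```javascript
// Queue DOM reads and writes separately; flush all reads, then all writes,
// so the browser never has to recalculate layout between a write and a
// following read (the layout-thrashing pattern described above).
function createScheduler() {
  const reads = [];
  const writes = [];
  return {
    measure(fn) { reads.push(fn); },  // queue a DOM read
    mutate(fn) { writes.push(fn); },  // queue a DOM write
    // Flushed manually here so the sketch runs outside a browser.
    flush() {
      reads.splice(0).forEach(fn => fn());
      writes.splice(0).forEach(fn => fn());
    },
  };
}

// Interleaved calls still execute as read, read, write, write:
const order = [];
const dom = createScheduler();
dom.mutate(() => order.push('write A'));
dom.measure(() => order.push('read A'));
dom.mutate(() => order.push('write B'));
dom.measure(() => order.push('read B'));
dom.flush();
// order is ['read A', 'read B', 'write A', 'write B']
```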

EDIT: just to be clear, I'm not saying that it's wrong to use React but that the reasons you do so are the same as why we're not writing desktop apps entirely in assembly: it takes less time to build richer, more maintainable apps. The majority of web apps are not going to be limited by how quickly any framework can update the DOM.

1. I partially reduced that to a smaller testcase in https://gist.github.com/acdha/092c6d79f9ebb888496c which could use more work. For simple testing that was using JSX inline, but the actual application used a separate JSX file compiled following normal React practice.

2. See e.g. http://wilsonpage.co.uk/preventing-layout-thrashing/

3. https://github.com/wilsonpage/fastdom

React allows you to optimize as much as you want while still keeping the component nature. In this case, you realize you need infinite appends.

You make a component that puts a reference div into the DOM. Next you override the default shouldComponentUpdate so when you get new data, you create the raw DOM elements using a document fragment and this.refs['elem'].appendChild(newDom) -- 0.14 syntax.
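As a rough illustration of that escape hatch, here's a sketch using a stand-in node object instead of a real DOM element (no React or browser needed, so it runs anywhere). FakeNode, AppendOnlyList, and receiveData are made-up names for illustration, not React APIs.

```javascript
// Stand-in for a DOM node: supports appendChild, nothing more.
class FakeNode {
  constructor(label) { this.label = label; this.children = []; }
  appendChild(node) { this.children.push(node); }
}

class AppendOnlyList {
  constructor() {
    // Stands in for the reference div grabbed via this.refs['elem'].
    this.container = new FakeNode('container');
  }
  // Returning false tells React never to re-render or diff this subtree;
  // we take over updates manually.
  shouldComponentUpdate() { return false; }
  // On new data, build raw nodes and append them directly, as the comment
  // suggests doing with a DocumentFragment in a real app.
  receiveData(items) {
    for (const item of items) this.container.appendChild(new FakeNode(item));
  }
}

const list = new AppendOnlyList();
list.receiveData(['img1', 'img2']);
list.receiveData(['img3']);
// list.container.children.length is now 3; nothing was ever diffed.
```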

Premature optimization is usually bad. React allows you to write your app and then go as deep as necessary when optimizing later. The fact that you can do this while still keeping within react is a testament to the power of the framework/library.

The fact that a Google employee who pushed web components has a problem with a framework he doesn't know in a case that should usually be avoided without the optimizations that are possible says more about him than the framework he is criticizing.

> The fact that a Google employee who pushed web components has a problem with a framework he doesn't know in a case that should usually be avoided without the optimizations that are possible says more about him than the framework he is criticizing.

Or possibly that you haven't paid enough attention to what he wrote. He was very clear to mention that React's productivity wins are significant but wanted to make a point about how important it is to regularly test performance rather than just assuming hype is universally true – or, conversely, that people talking about native browser performance have done the broad, valid benchmarking needed to support sweeping general claims.

As an example of the difference, you're reacting defensively, trying to downplay real concerns which are easily encountered on any large project, and attacking the source rather than engaging with the actual demonstrated problem. That might feel good, but unfortunately problems aren't fixed by pointing out that the reporter works for what you perceive as The Other Team.

I'm glad to see actual React developers are responding differently by trying to improve performance on weak points:


This is clearly the effect Paul Lewis wanted from his post and it's the one which improves things for everyone who uses React.

I am not sure you are ever going to get your benchmark with fully written apps; it's a lot of effort to build an app out twice.

With pure DOM, traversing the DOM isn't the issue: you can keep references around if you really need them, or indexes, or any number of other abstractions to make it perform well. You probably wouldn't build it any other way, and in most cases you would end up with a set of primitives that do the same kind of update batching.

Yeah, but this is significantly worse than performance comparisons that have already been done. See here for example:


TodoMVC is way more realistic because items are actually edited; you are not just infinitely adding DOM elements. (Why would you even do that, instead of just showing the number of elements that can fit on the screen and reusing those?)

This smells like a shill piece by a Google developer advocate [1] with a horse in the race [2].

The key benefit of React is an extremely low cognitive load. There are only three simple concepts to grok (props, state and lifecycle) to get productive. The code is very easy to reason about (components are essentially pure functions of props and state) and debugging is much simpler than with vanilla JS or jQuery on a project of any meaningful size.

With respect to performance, React shines when it comes to DOM mutations (e.g. removing a div from the DOM, creating a new div, inserting the new div into the DOM), which is what you generally encounter with real-world load. Here is a demo illustrating such load [3]. The benchmark offered by the OP is amazingly contrived (actually it feels designed to show React in a bad light, and the lack of full source code is very telling). I struggle to think of a real-world scenario for an append-only page with 1000+ images in the DOM; there is simply no valid reason to do that. React in turn makes it really easy and fast to implement infinite scrolling (similar to UITableView), and there are a couple of good open source components that address that.

[1] See the bottom of https://aerotwist.com/

[2] https://aerotwist.com/blog/polymer-for-the-performance-obses...

[3] https://www.youtube.com/watch?v=1OeXsL5mr4g

Yeah. I mean...

First off, the attraction of React, to me, is mostly based on being able to write maintainable, testable, reusable, easy to reason about code when working on large, complex webapps. Yeah, React has relatively high performance—at least when compared to a sluggish framework like Angular when writing those large apps, but that's not even the core selling point (as far as I'm concerned). But okay, part of the attraction of React could be summed up as "better performance than Angular on complex applications", and I guess a benchmark proving (or disproving) that would be nice to have.

That's not what we got. Instead, this guy tested the raw performance of some very simple code compared to the sort of vanilla code you'd never ever write in the sort of app that React (or Angular) is actually designed to help with. So...yes, React is slower than vanilla JS at stuff that vanilla JS is faster at. Shocking.

I'm struggling to think of a less meaningful way to do the analysis.

Said every developer who's never had to "just speed up" an application written with a framework that makes development/debugging easy, but has fundamental performance issues that cannot be overcome without a significant overhaul.

Felt the same to me. But I did learn of shouldComponentUpdate while reading the comments here, which for the author's scenario would seem to dramatically reduce computation time when using React. It's nice that React gives you an "advanced" option if you need it for your uber-update-all-the-elements website.

Here is a really good talk on React performance that may also be helpful - https://www.youtube.com/watch?v=KYzlpRvWZ6c

Shill, agreed. Surprised to see that this Google developer advocate leaves Angular out of the test and focuses solely on React.

Posting benchmarks without full source code is bad. And you should feel bad.

I remember seeing people bash Angular 1.2/1.3 for its speed, then looking at their source and finding they weren't using some of its most powerful features (cough cough, React Conf: https://youtu.be/z5e7kWSHWTg?t=327). This was eventually corrected by someone who looked at the source code, forked it, and made it perform on par with the React version.

I don't doubt the vanilla JS is faster (in this case), but really, you need to make your source available before posting to your blog. Benchmarking in a fair way is hard, and you are damaging the public perception of (from what I hear) a great framework. Maybe this is justified, but at least give the fanboys a chance to call you out; maybe everyone will learn something new.

Frustratingly, we can't actually look at the React code he used, because it is Google proprietary code(?). But there are definitely slow and fast ways to take an array of elements and render it. Ideally, each element would be wrapped in a component with a "key" property to speed up diffing.

Also, the standard way to do infinite scrolling when you care about performance, especially on mobile, is to reuse a small number of elements, just enough to cover the screen and then some. You don't actually create 1500 elements.

Good point about reusing. Re keys, though, he flat out says he uses them at the bottom.

Obviously diffing is going to have some overhead.

But this overhead is minimal in most use-cases. The benchmark in this article is not a real use-case.

If you want to show 1200 images, or 1200 elements of any kind, in one web page, then you should only create DOM elements for those that should actually be visible to the user. You read the scroll position, calculate which ones would be visible in the viewport, create DOM elements for those, and disregard the rest.

In most real-world applications, this technique would suffice. Your DOM and vDOM would be small, and you'd only be diffing maybe 5-10 elements at a time.
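The windowing technique described above boils down to a little arithmetic on the scroll position. A sketch, assuming a fixed row height (visibleRange and overscan are illustrative names, not any library's API):

```javascript
// Given the scroll offset, compute the index range of rows that intersect
// the viewport, padded with a small overscan so fast scrolling doesn't show
// blanks. Only these rows get real DOM nodes; the rest are never created.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1 + overscan
  );
  return { first, last, count: last - first + 1 };
}

// 1200 rows of 100px each, a 600px viewport scrolled 20000px down:
// only 10 rows need nodes, not 1200.
const range = visibleRange(20000, 600, 100, 1200);
// range.first === 198, range.last === 207, range.count === 10
```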

Although, I can think of use-cases where this technique, and React's style of coding, may not be sufficient. One example is iOS's Photos app. Sometimes it animates hundreds of elements at a time (when you're viewing photos by year or location). I guess diffing might not be a fast enough solution for that use-case.

Completely agree. I had to implement something like this when I worked at Stipple in the past, and the biggest problem became optimizing the DOM interactions (adding and removing images from a masonry feed as the user scrolled).

Vanilla JS or React, you can't have 1200 images sitting on a web page all at once without optimizing for what the user is actually looking at and interacting with.

My React app initially took about 10 seconds to render changes on an iPhone 5s and a 1st gen Moto X. Also, a user with a Chromebook complained that it degraded other apps. I did some digging and found that many React developers use some form of immutability with the pureRender mixin, which is a huge boost to performance. Doing that alone made it slow but usable, and then I batched up changes to state and split parts of the application into different tabs to improve things even more.

To this day, I still wouldn't be done with the application had I used vanilla JS, so even with the performance tuning it was worth it, but it was not without cost.

i've been doing react for almost two years (since before it was publicly released) and i've never seen anything render that slowly, and i basically never use purerender

if you share that code with me i will find the actual problem for you, just because i'm curious. it isn't react.

For what it's worth, I also work on a fairly large React application and haven't run into organically strange performance dips. If we're designing a component to do interesting things with unbounded data sources, we absolutely consider the vanilla JS approaches before trying to model the problem in terms of composable React components. But compared to every other approach I've experienced building web applications, React does give us pretty ergonomic and efficient building blocks by default.

It wasn't just React; I was definitely doing unneeded state changes, and I've got it to the point now where there is no noticeable delay.

One big issue was that I wanted dependent fields to update as you type, which also meant validating as you type. I added a timer to delay that, so it wasn't all running on every single keystroke but wouldn't update until you stopped typing.
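The as-you-type delay described here is a classic debounce. A sketch with an injectable timer so it can be exercised without real time passing (debounce and makeClock are illustrative helpers, not from any library):

```javascript
// Debounce: each call resets the timer; fn only runs once calls stop
// arriving for delayMs. Timer functions are injectable for testing.
function debounce(fn, delayMs, setTimer = setTimeout, clearTimer = clearTimeout) {
  let handle = null;
  return (...args) => {
    if (handle !== null) clearTimer(handle);
    handle = setTimer(() => { handle = null; fn(...args); }, delayMs);
  };
}

// Fake clock: timers fire only when we say so (delay is ignored).
function makeClock() {
  let pending = [];
  return {
    set: (fn, _ms) => { const t = { fn }; pending.push(t); return t; },
    clear: t => { pending = pending.filter(p => p !== t); },
    fireAll: () => { pending.splice(0).forEach(p => p.fn()); },
  };
}

let validations = 0;
const clock = makeClock();
const validate = debounce(() => { validations += 1; }, 300, clock.set, clock.clear);
validate('a'); validate('ab'); validate('abc'); // three quick keystrokes
clock.fireAll();                                // user pauses; timer fires
// validations === 1 -- only the final keystroke triggered validation
```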

Reducing the number of dependent values in the DOM tree at a single time helped a lot too, and there were logical categories to split them into. My validation code was not particularly optimized either, since it was originally designed for batch-processing.

I really don't get the huge concerns people have with performance. I find Chrome's profiling tools pretty primitive compared to what I'm used to, but they were more than adequate for me to steadily improve performance.

There are probably still some improvements to be had, but right now there are no issues with responsiveness on mobile, so I stopped looking.

I can totally empathize with the large form woes, having worked on a couple of similar client projects with similarly unusual requirements. Angular (the first version, I hear awesome things about the latest), in particular, fell down hard when managing a very large number of semi-dependent input fields. We ended up customizing large swaths of its collection diffing code to gain minor performance increases before designing a pure JS solution that was very performant, if very expensive and time-consuming to implement. I have no reason to think that React would have fared any better in either case.

Just curious, why wouldn't you be done with the application if you had gone the vanilla route? What was it that sped up the development process?

I was starting from a CRUD application with forms for inputting all the fields to store in a database, from which a PDF could be generated. So, I already had code to calculate all of the dependent values from the inputs. Turning it into React took about one day's work to get the poorly performing prototype, and fully switching to ImmutableJS took another few hours.

Also, I know approximately nothing about web browsers. It's possible that using something like Bootstrap or Angular would have given me a big boost here as well, but the article is comparing to vanilla JS, not to other frameworks.

The large React apps I've seen running on those types of devices have taken hundreds of milliseconds to render, not multiple seconds.

Doing a couple of passes with Chrome's profiler or React's profiling tools would give you a better idea of where your time is going.

What do we learn here? I don't see a big lesson: this benchmark compares React to vanilla JS, and there is a reason frameworks exist.

I know this is obvious, but I'll say it anyway: a benchmark against a hypothetical alternative is a bit pointless. React should be benchmarked against frameworks trying to solve the same problem, like Web Components (Polymer), Angular, or others.

To be fair, he was pretty clear about the reason for it:

I know that React’s performance has been compared to that of other frameworks, like Angular. What I wanted to do was my own test of it against plain old vanilla JavaScript…The docs claim that JavaScript is fast, and it’s meddling with the DOM that’s slow. So for that to be true we should be largely able to switch out React with something else and see pretty similar performance characteristics.

IOW, it's a common assumption that React has minimal overhead for DOM manipulation, and this benchmark suggests that it rapidly becomes performance constrained on relatively small DOM trees, especially on mobile.

I don't know if this is accurate or not (I suspect there's something a bit off, because the numbers don't look realistic to me) but it's worth looking at anyway!

To me the only lesson seems to be: this is how React works. The diffing is known to work like this; if you are not calculating the diffs beforehand (or using some non-standard way to set the state), there is not much one can do besides obvious things like immutables. Diffing massive tables against each other every time is bound to be slow.

Comparing it to vanilla JS is the pointless part to me; it's a complicated way of comparing a plain list append against a diff algorithm.

The point of the benchmark is that it's obviously not the DOM manipulation that is slow but the JS part. So moving as much as possible away from DOM manipulation to JS heavy lifting can, in theory, result in performance loss. That's pretty much all the benchmark says (and seems to show). Now, the test case is some synthetic benchmark, but unless you take some time to program Facebook in React and in plain JS and do testing, you'll need to take some shortcuts.

Of course it is a lot of work to diff two DOM trees, but the general impression of React was that the DOM is so unbelievably slow that it will still be much faster to do the diffing (which i never understood, technically, but what the heck, FB engineers are wizards!). At least, that was the impression i got from the React announcements. And i don't even code JS nowadays, so i couldn't care less if you prefer React or Ext or write everything as a Silverlight plugin :P


>In other cases like where elements in the middle of a table are updated, you would get into a mess writing vanilla code, either complexity or performance wise, because you'd have to traverse the DOM to get to where you need to do updates and do each update individually.

If you have a table and want to edit information, this is trivial to do in a React-like way without React: the edit button stores a reference to the row; you edit the values, update the data store, render out a new row element, remove the old element, and insert the new one. Yes, React will generally be less code, and the argument becomes whether this is more complex, which it is, but I'd say neither the complexity nor the performance truly suffers. I bet the performance will be faster, since you removed diffing entirely and React has to perform your logic anyway.
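The row-swap approach described here can be sketched without React, using plain objects in place of DOM nodes (Row, Table, and update are illustrative names; in real DOM code the swap would be oldNode.replaceWith(newNode)):

```javascript
// Stand-in for a rendered table row.
class Row {
  constructor(data) { this.data = { ...data }; }
}

// Keep a map from record id to its row node, so an edit never needs to
// traverse the table looking for the right element.
class Table {
  constructor() { this.rowsById = new Map(); }
  add(data) { this.rowsById.set(data.id, new Row(data)); }
  // "render out new row element, remove element, insert new element":
  // build a fresh node from the updated record and swap it in by id.
  update(data) { this.rowsById.set(data.id, new Row(data)); }
  get(id) { return this.rowsById.get(id); }
}

const table = new Table();
table.add({ id: 1, name: 'a.mov', checksum: 'abc' });
const before = table.get(1);
table.update({ id: 1, name: 'a.mov', checksum: 'def' });
// The row node was replaced wholesale: no diffing, no traversal.
```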

I think it is beyond question that a framework shields you from some complexity. That's the purpose of it.

Nevertheless, if you ignore the code complexity, i am fairly certain you can write JS code to update something without needing to traverse the whole DOM each time. I don't program JS, but wouldn't you have some variable holding a reference to the place you want to update? If you have a DOM element that will be updated every second, you would surely not search the whole DOM every second, but get it once and update it multiple times, no?

Without speaking for Paul, my understanding is that he is just appending to a list, which in theory should be a simple diff, right? Anyway, we are trying to get the source open ASAP; we just have a rather bizarre need to go through a release process.

Comparing to vanilla JS provides a good benchmark as to how much the cost is to use a particular library. To provide an example, the Angular team has claimed in talks a few months ago that Angular 2's performance is already close to pure JS for rendering large numbers of DOM elements (I hesitate to say anything about how complex the trees are since I don't remember the details) - profiling is built into the repository itself.

This would cast doubts on some of the theoretic claims on the React docs, at least for mobile.

I'd love to see the full source of the render function(s) here, we may not be seeing the full story... Are the React children having appropriate `key` attributes set as the list increases in size? etc

I spoke with Paul, he is using `key` and said he is following best practice as per the docs. That being said, we do want the code to be open, we just have a rather heavy OSS process that can take a while to get it out (it's a nut ache at the moment).

Is he using shouldComponentUpdate() [1] to avoid diffing existing photos when new ones are added?

[1] https://facebook.github.io/react/docs/advanced-performance.h...

This has been added to the end of the post:

> Did you set shouldComponentUpdate to false? Yeah that would definitely improve matters here, unless I need to update something like a “last updated” message on a per-photo basis. It seems to then become a case of planning your components for React, which is fair enough, but then I feel like that wouldn’t be any different if you were planning for Vanilla DOM updates, either. It’s the same thing: you always need to plan for your architecture.

The Vanilla DOM equivalent is just not writing any update code until you need it. I'd argue that adding...

  shouldComponentUpdate() { return false }
...is the equivalent React way of saying "this will never be updated", but the default behaviour is the opposite of Vanilla DOM because React automatically handles updates for you, regardless of when you realised you were going to need them.

The "last updated" scenario would be a much more interesting comparison with Vanilla DOM, as a React component's lifecycle gives you an obvious place to set up/tear down the necessary timeouts on an individual basis and updates happen only when they need to. I was pleasantly surprised by how trivial it was to make _every_ time/date displayed in my Hacker News clone live-update, even down to the per-second level: https://github.com/insin/react-hn/commit/08893a046b5289b07ef...

It seems like the end result of React's diffing algorithm is going to be an appendChild. Exactly what is happening in the vanilla JavaScript. But I don't think the diffing algorithm itself is the reason for the time disparity but rather React's rebuilding of the tree to diff against the DOM. I agree with you, shouldComponentUpdate() should massively speed up React.

Right, with lots of elements, React still needs to render and diff each component which for a lot of elements causes a lot of garbage and diffing work. ShouldComponentUpdate / PureRenderMixin avoids this altogether when props/state isn't changed so this gives a significant boost with lots of elements.

Because of the somewhat pathological benchmark design, he could probably implement a component for each picture and have shouldComponentUpdate simply return false. With proper keys, I have a feeling the React performance would improve dramatically, perhaps as much as or more than with the pureRender mixin.

PureRenderMixin is basically shouldComponentUpdate with a shallow comparison of previous/next props and state, but the big win is avoiding render and diff for the component, so returning false would be about the same as PureRenderMixin.

Maybe I'm not following, but: if this was a contrived example -- i.e. written specifically for the purpose of testing an idea, and not a result of internal evaluation where there are proprietary bits -- then what does "your" (I don't know who you are) OSS process have to do with it?

Can't release code until it goes through the open-sourcing process. It's a pain, that's all. The code will be shared.

Presumably the process applies to releasing code written during work hours full stop.


I don't mean to call out the test as flawed, but I'm not sure something with infinite scrolling is the sort of application I would use React for. I use it a lot for modals and forms, but if you're only appending to the DOM and sometimes clearing it out (assuming you have functionality that 'clears' the page once some images are loaded), it's a simple enough case for vanilla JS.

In setState() the new photos are appended to the end of a list, and this list is re-checked in its entirety at every update. So, the React solution is actually at least O(N^2).

Something to worry about!

I'd love to see a little bit of ImmutableJS added to see how much of a speedup we have.

How does ImmutableJS work? If it was turned into a straight list of immutable records, each comparison would be quicker, but the complexity would remain the same. You'd have to do something like turning it into a tree to reduce the complexity.

ImmutableJS's lists are collections which are implemented behind the scenes as trees. When you append to an immutable list a new list is returned so the comparison in shouldComponentUpdate() becomes simply a reference comparison between the old and new list (return nextState.list !== this.state.list).
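To make the reference-comparison point concrete, here's a sketch using plain frozen arrays in place of ImmutableJS lists (the real library also gives structural sharing via its tree-backed implementation, which plain copying doesn't):

```javascript
// Append never mutates: it returns a new list, so "did anything change?"
// collapses to a single !== on the list reference.
function append(list, item) {
  return Object.freeze([...list, item]);
}

// The shouldComponentUpdate check the parent comment describes:
// an O(1) reference comparison, no per-item work.
function shouldComponentUpdate(prevList, nextList) {
  return nextList !== prevList;
}

const a = Object.freeze([]);
const b = append(a, 'photo1'); // new reference: an update is needed
const c = b;                   // same reference: setState with unchanged data
// shouldComponentUpdate(a, b) is true; shouldComponentUpdate(b, c) is false
```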

That doesn't help reduce the computational complexity in this instance. After appending, the lists are different, so it renders the component, which starts iterating the children and checking each one in turn to see if it's changed. To reduce the computational complexity you'd need to do something like walking the tree that backs the list instead, so you can ask it "did any of the children under this node change?", which means you can start skipping the comparisons for big chunks of the list. I don't know if React and Immutable.js can be combined in this way without significantly rewriting your components. I suspect not.

With ImmutableJS, the first N records won't be compared at all because they're the same. Thus, if the time is spent in comparing the change in states as you implied, the comparison will be much quicker, and React will essentially not care about the already existing photos (as in, will not spend time there) but just create new ones.

As someone else pointed out, the render function is another interesting part where time can most likely be wasted. Unfortunately there's no way for us to know more.

It still needs to compare them, it will just do a reference comparison rather than a deep comparison on the fields. So again, it will be (a lot) quicker, but the complexity is still the same.

You're right. And there's no way around that other than not actually rendering more items than are visible on the screen. Which is darn difficult on the web due to a lack of APIs.

Yeah, but if you chunk the items (in groups of 10, let's say) into components, you can return false from shouldComponentUpdate in all but the last component, and therefore be fast.
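A sketch of that chunking idea, counting how many chunk "renders" a single append actually triggers (the chunk size and the sameness check are stand-ins; with an immutable list library the check would be a true reference comparison):

```javascript
// Split a flat list into fixed-size chunks, one component per chunk.
function chunk(list, size) {
  const chunks = [];
  for (let i = 0; i < list.length; i += size) chunks.push(list.slice(i, i + size));
  return chunks;
}

// Count how many chunk components would actually render: each unchanged
// chunk bails out via its shouldComponentUpdate-style check.
function countRenders(prevChunks, nextChunks, isSame) {
  let renders = 0;
  nextChunks.forEach((next, i) => {
    if (!isSame(prevChunks[i], next)) renders += 1;
  });
  return renders;
}

// Appending one item to a 100-item list only dirties the final chunk.
const prev = Array.from({ length: 100 }, (_, i) => i);
const next = [...prev, 100];
const prevChunks = chunk(prev, 10);
const nextChunks = chunk(next, 10);
// Compare by length here, since slice makes fresh arrays each call; an
// immutable list would make this `a === b` instead.
const isSame = (a, b) => a !== undefined && b !== undefined && a.length === b.length;
const renders = countRenders(prevChunks, nextChunks, isSame);
// renders === 1: ten chunks skipped, one (new) chunk rendered
```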

Yup, there are a bunch of things you can do to reduce the constants, but even with chunking the computational complexity is the same.

In this case, it's not hard to reduce the constants to a point where the complexity doesn't really matter though.

Is the complexity actually quadratic, like amelius said?

The diffing operation is linear according to React documentation: https://facebook.github.io/react/docs/reconciliation.html

Is Array.prototype.concat linear, too? What about the ImmutableJS equivalent? I don't know a whole lot about algorithms; my intuition is that there would be a way to add an item to the end of the array without traversing the entire thing, but I guess it all depends on how the data structure is implemented.

I've been pretty vague which probably isn't very helpful so I'll try to be a bit more precise: for a mutable implementation it's quadratic in the number of times render is called over the lifetime of the component; for an immutable implementation it's quadratic in the number of comparisons (inside shouldComponentUpdate) over the same lifetime. You can generalize both operations to "process component when building the component tree".

Each time you add an item to a list of size n, it needs to process the previous n items. This forms the series 1+2+...+(n-1)+n, which is (n^2+n)/2.

Obviously a reference comparison is a lot quicker than a render, so you get big speed wins using immutablejs that are definitely worth it, but it's still O(n^2) components processed.
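The arithmetic above can be checked with a few lines: simulate appending n items one at a time and count the total number of components processed over the component's lifetime.

```javascript
// After each append the component tree holds `size` items, and each one is
// "processed" once (a render+diff in the naive case, a reference check in
// the pureRender case). Summing over all appends gives 1+2+...+n.
function totalProcessed(n) {
  let total = 0;
  for (let size = 1; size <= n; size += 1) {
    total += size;
  }
  return total;
}

// Matches the closed form (n^2 + n) / 2 from the comment above.
const n = 100;
const total = totalProcessed(n);
// total === 5050 === (100 * 100 + 100) / 2
```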

A component being processed via returning false from shouldComponentUpdate is very very different from a component being processed by running the render method and then doing a diff, so that is not a good model.

No it isn't, I'm talking about avoiding render computation, and then therefore the diffing. If 1 render method gets called that contains 10 elements, and 99 other render methods are not called because of returning false from shouldComponentUpdate, that is very different than 1 render method getting called with 1000 elements, and it is not just in the constants.

I agree that he should have waited until the code was open source to post the benchmarks. It's not cool to post a "negative" benchmark without giving the framework's authors a chance to correct any misuse (and it sounds like there may well be some here).

It's only a negative benchmark if you choose to perceive it that way. I think it's a valid data point; it doesn't say anything (that I perceive) negative...unless you're really worried about the framework's JavaScript performance relative to straight JavaScript.

Where is the source code? I need proof that React has performance problems. I use React.js for a large application with a lot of logic on one page, and everything works pretty well on mobile devices and desktop...

I only see part of the source code, but what is there has two pretty big performance blunders (no PureRenderMixin, mutable state).

PureRenderMixin and mutable state are still pretty common in React applications, I think it's realistic not to include them.

The article mentions the source code.

my god, a library written in javascript is slower than javascript itself? the library doesn't have negative execution time?

PureRenderMixin would definitely help here, even to the point where it's comparable with the JS solution.

Even though keys help here, render is still called for every image every time and then diffed.

He'd have to extract the image divs to a component and put PureRenderMixin on it so that the component's render is skipped altogether.

I only had a quick look at this, but it might be the wrong approach. It looks like every time a new photo is concatenated onto the data array, every single node will be re-rendered, not just the new ones. This won't happen with the raw DOM version because of the "if (!imageInDOM)" line.

A better way to do this could be with child components with their own unique key, but since we can't see the render method I'm not sure exactly what's going on.

nb: I'm not saying React is definitely "fast" here, just pointing out a potential flaw. I'm also not saying I'm definitely right ;)
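The "if (!imageInDOM)" idea can be sketched like this: track which keys have already been rendered and only create nodes for new entries. Everything here is illustrative (createNode stands in for whatever builds the actual DOM element), but it shows why the vanilla version touches only the new photos:

```javascript
// Incremental renderer sketch: nodes are created once per item id and
// never rebuilt, mirroring the vanilla "if (!imageInDOM)" check.
function makeIncrementalRenderer(createNode) {
  const rendered = new Set();
  return function renderNew(items) {
    const created = [];
    for (const item of items) {
      if (rendered.has(item.id)) continue; // already in the DOM, skip it
      rendered.add(item.id);
      created.push(createNode(item));
    }
    return created; // caller appends only these; existing nodes untouched
  };
}
```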

His Vanilla examples is exactly what Ember.js does because it already knows which data changed and to which node it was linked to so it will diff the vDOM only for those.

I would love to see the same comparison with Ember.js in the mix.

Given that Chrome has profiling built right in, shouldn't it be basically no effort to conclusively find out where time is being spent?

Sure, but I suspect it's a question of whose role it is to do that. We (or at least Paul) could go back and see if we can submit issues or even fixes to the project. So someone could do it; the question is just who cares enough, I suppose.

It sort of remains amazing that you are convinced the problem is not this casual benchmark, which has already had two severe problems exposed and doesn't show its source, but rather a library that is in heavy use and that nobody else seems to have these problems with.

Well, I think the author's point is that the conventional hive-mind wisdom in this community has been that DOM manipulation causes performance problems and the framework is wicked fast and never the bottleneck, which may lead naive or new developers to never bother considering it as a potential source of problems. This very clearly illustrates that you shouldn't always disregard the framework as a potential source of problems.

I've been using a homebrew "framework", really a slew of classes and singletons, with Handlebars templates and jQuery. It performs fine for a desktop app; I re-render things as they change. But React and Flux would be a much nicer way to organize and build the code. Plus, for all the stuff I'm re-rendering, the diffing React would give me would make it more performant.

I do think, though, that examples like this are good for showing that you shouldn't always use a framework.

Furthermore, on mobile I've used ES6 classes, Handlebars for templates, and jQuery to drop the HTML in and do event bindings. It runs decently well in a Cordova app, but I do find some things lag. I've done a bit with React in Cordova as well and found similar problems. Getting a speedy mobile app built with HTML5 is hard; I really think native is the way to go for a proper experience. I've been liking React Native for a side project of mine, but I'm starting to consider using Swift instead.

Interesting to see how defensive people are on here about their favorite new toy. I didn't even find the article that harsh, actually. Of course vanilla JS will outperform any toolkit. If you like React, use it for that reason, not because you thought it was magic...

Adding 1,200 image components to a web app and never cleaning up the ones that aren't visible is a poor approach to building a mobile web application.

It doesn't matter what technology you use, it's a bad idea.

So he's basically tested the performance of a badly architected application.

Kind of like how React benchmarks like to create tables with hundreds of rows and dozens of columns to test against other frameworks?

You mean a grid? A component one is often asked to add to web applications for businesses. I've added 4 in the last week alone to apps I've worked on. It's bread and butter stuff.

You shouldn't be making large grids for the web. Terrible practice. You should paginate.
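The pagination alternative amounts to slicing the data before it ever reaches the grid. A minimal sketch (function name and 1-based page numbering are my own convention):

```javascript
// Serve the grid one fixed-size page at a time instead of
// materializing a row for every record. Pages are numbered from 1.
function getPage(rows, page, pageSize) {
  const start = (page - 1) * pageSize;
  return rows.slice(start, start + pageSize);
}
```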

I'm not sure why it's terrible practice, the only reason I can think of is a performance one. As I said earlier in the thread though that's only a problem if you create infinite lists and never clean up after yourself.

The correct way to design such components would be to only create DOM for elements in view. Which is why this test is flawed.
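The "only create DOM for elements in view" approach boils down to computing a visible row range from the scroll position. A sketch, assuming fixed-height rows (rowHeight and overscan are illustrative parameters, not any particular library's API):

```javascript
// Windowing sketch: given a scroll offset and fixed row height, compute
// the only range of rows that needs real DOM nodes right now.
function visibleRange(scrollTop, viewportHeight, rowHeight, rowCount, overscan) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last }; // render rows first..last, recycle the rest
}
```

With 1,000 rows of 50px in a 600px viewport, only a couple dozen rows ever exist in the DOM at once, regardless of how many records back the grid.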

I once had an interview with a YC company where I said I didn't believe React was necessarily the right fit for applications with simple DOM manipulation, like one I had worked on. I pointed out that you can outperform it when you know exactly what needs to happen to the DOM, as in this example of continually appending new elements. React overall is great, but people need to realize that there are faster React-like frameworks, and sometimes React isn't the fastest option; what you normally get in exchange is simpler code to reason about and less code to maintain.

It's always a trade-off between speed of development/maintenance and performance. I think React is the best "good enough" compromise there is: it's really fast and a joy to work with. But obviously, in certain very specific cases, vanilla JavaScript where you have 100% control over the JavaScript/DOM will be faster; there's no arguing about that.

Some of the measurements here are bogus.

Consider the chart under "Mobile: Vanilla" heading. It seems to indicate that the total rendering time _drops_ with the number of pictures. That is _almost_ physically impossible. But it is super easy to get that kind of result in benchmarks, e.g. by not properly clearing caches between runs.

Don't make assumptions.

Here's a possible explanation (but who knows):

At a certain page height, WebKit/Blink may change rendering behavior. It doesn't need to render elements that are far away from the viewport.

You did not think this through, did you? His chart shows that rendering 1,000 photos is _faster_ (>3x) than rendering 200 photos. Think about it. How long did it take to render photos 200-1000 (in the 1,000-photo run)? Negative time? Please don't tell me that V8 is so awesome that it can run JS in negative time.

Easy there. You've misunderstood the benchmark. It shows the time to add 100 images, given the existing number of images. So adding the 900th-1000th image was ~3x faster than adding the 100th-200th.

I realized that, and looked over the thing again, but the measurement still does not add up: even the author calls it a "bamboozle" and says "I have no idea what's happened at the end."

So if the benchmark produces weird results, and is published without source code, so that the results cannot be reproduced, why would anyone trust it?

Even the author suspects that V8 optimized away the vanilla code... and if that is what happened, then apples are being compared to oranges and the whole conclusion is bogus. Which was kind of my point.
