Hacker News
Is ReactJS really fast? (500tech.com)
409 points by Murkin 692 days ago | 186 comments

React.js is actually just really pleasant to work in and easy to reason about, and the virtual DOM is what makes that all possible without it becoming unacceptably slow. DOM diffing isn't there to make React faster than everything else ever imagined. It's there to let you stop thinking about the DOM and focus on the world state of your frontend instead.

I wasn't truly interested in React until I read this, which does a better job of spelling out React's real advantages than I ever could: http://jlongster.com/Removing-User-Interface-Complexity,-or-...

Here's a choice quote:

"Rerendering everything (and only applying it to the DOM when something actually changed) vastly simplifies the architecture of our app. Observables+DOM elements is a leaky abstraction, and as a user I shouldn't need an intimate knowledge of how the UI is kept in sync with my data. This architecture opens up lots of various ways to optimize the rendering, but it's all completely transparent to the user."

> Observables+DOM elements is a leaky abstraction

The DOM itself is also a leaky abstraction. Render cycles are so prohibitively slow that we've started maintaining a parallel DOM and implementing diffing algorithms in JavaScript. As brilliant as that may be, it's also crazy that it's come to that.

This is a problem that should be solved in the browser. HTML5 should add a simple API to transactionally update the DOM and only render after all changes are committed. This would prevent every single framework from having to implement this logic.
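The shape of such an API is easy to imagine. Here's a hypothetical sketch in plain JavaScript (none of these names exist in any browser): mutations are queued against a plain object standing in for a DOM node, and nothing is applied until a single commit.

```javascript
// Hypothetical transactional-update helper. Callers queue mutations;
// nothing touches the target until commit() applies the whole batch.
function createTransaction(target) {
  var queue = [];
  return {
    set: function (prop, value) {
      queue.push(function () { target[prop] = value; });
    },
    commit: function () {
      queue.forEach(function (apply) { apply(); });
      queue = [];
    }
  };
}

var fakeNode = { textContent: 'old', className: '' };
var tx = createTransaction(fakeNode);
tx.set('textContent', 'new');
tx.set('className', 'highlight');
// Nothing has changed yet; a real browser could defer layout here.
tx.commit(); // both changes land in one pass
```

In a browser this is essentially what double buffering or React's batched updates amount to: a layout/render pass happens once per committed batch instead of once per mutation.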

The DOM _is_ only rendered after your JavaScript has finished running - as long as you don't ask for properties that can only be known by rendering it out.

It's not "crazy", it's basically just double buffering, a technique that has been around for a very long time.

Seeing as how ES6/7, HTML5 and CSS3 specs haven't been afraid to incorporate ideas that started in external libraries, I do hope we see this baked directly into the browser at some point.

For what it's worth, transactional DOM update is a fantastic idea that I hadn't heard before your comment. Are you familiar with anyone actively exploring such a thing?

I remember React devs saying they've always kept that in mind, so the whole library is designed to be easily detached from the DOM, or whatever renderer you might be using. They're also kind of proving that now with React Native, and knowing what's coming in the next React versions it's gonna be even better.

Nice thing about React is that even after you remove the Virtual DOM it still encourages a really good model of programming and makes it effortless to build your app.

The little time I've spent learning Mithril [1] makes me think it's a nice middle ground between speed and pleasurable coding experience.

Their benchmarks have it at 8x faster at rendering (uncompiled) and 28x faster to load (although Benchmarks Lie (TM)).

It feels like writing vanilla JavaScript for the most part, which is delightful and exciting. The views part, which is the most comparable with React since React only deals with views, is extremely reminiscent of React, and you can even write MSX, which is basically just JSX with some subtle differences.

[1] http://mithril.js.org/

EDIT: Link

"It feels like writing vanilla javascript for the most part, which is delightful and exciting." - this sentence makes me shiver.

That being said, this looks interesting, thanks for bringing it to attention!

ahaha I understand why it would. I meant that the framework is extremely unobtrusive. All you do is create beautiful, expressive JavaScript objects which act as your models and controllers, then you mount them with a Mithril command, and that's it!

Mithril is awesome (been using it for 7 months now). I really like that they finally have implemented components and described how they should be made.

What's the situation with components now? Last time I checked, a while ago, React encouraged having lots of components with internal state, and calling setState on a subcomponent would only update the DOM in that subtree. On the other hand, most other virtual DOM frameworks encourage you to keep a single global model object describing the whole page.

The leaky abstraction thing is at the core of everything that is wrong with the AngularJS model. I still like a number of things about AngularJS, but the data-binding model was really poorly thought out and implemented.

As far as I can tell this has been recognized even by the core team, and is a major reason for the sizeable changes in 2.0. It surprises me how many people in the community continue to defend it as "OK" when even the AngularJS team has arguably admitted its mistakes.

There's an interesting video by Netflix where they discuss using React in their stack. To increase performance on many of Netflix's TV- or console-based UIs, they have their own rendering engine they use instead of the manufacturer's browser engine. This means any of React's "speed" improvements are a moot point, so instead they use it for its simplicity for UI layouts.


Along that line of thinking, Flipboard created React Canvas, which renders directly into a <canvas> tag, bypassing a lot of the DOM:


I just have to add that this is how pages were generated server-side before XmlHttpRequest emerged. This was deliberate. Addressable and re-loadable states were part of the original design by Tim Berners-Lee. The fact that we have had this whole circus with mutable state on the client side is just a joke to me. I hope that someone invents a sane client-side lib with sane page generation quite soon. Should certainly be possible.

Oh, and not even performance-critical stuff like video games deals with the big mess that is mutable state. If game devs redraw their stuff at 100+ FPS and haven't needed mutable state, how come web devs fall right into that trap? Crazy.

> If game devs redraw their stuff at 100+ FPS and haven't needed mutable state

Well I wouldn't quite say that... although there was a time when "dirty rectangles" was an important feature of a graphics engine. Generally speaking, games are based on mutation of entities: to move an existing entity, we mutate its position rather than recreate it in the new position.

> Generally speaking, games are based on mutation of entities: to move an existing entity, we mutate their position, not recreate them in the new position.

Actually, games are generally an area where immutability is relatively easy to use. All of the mutation code can be isolated in one place that handles the transition to the next "tick", leaving nothing but immutables in the game logic itself. This is analogous to how ReactJS manages things through diffing.

Conceptually it's very easy:

World(t+1) = Sim(World(t))

but performance in practice is going to suffer a lot compared to in-place mutation, for any simulation that has a great degree of temporal coherence. I don't know of real-world examples of games developed with immutable/reactive patterns, outside of thought experiments like Carmack's (or Tim Sweeney's), or a few simple games in Haskell, Elm, etc. But I'd love to hear of more examples.

(Edit: some parts of games and game engines use this sort of "double buffer" approach for other purposes: smoothing and interpolation mostly, of visual frames in between logic ticks, or for network prediction. But in the cases I have seen or coded, most of the world state is not interpolated and not duplicated)
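The World(t+1) = Sim(World(t)) idea is easy to sketch in plain JavaScript (illustrative names only, not from any real engine): each tick builds a new frozen world from the previous one, so game logic only ever reads immutable snapshots.

```javascript
// Each tick returns a brand-new frozen world; nothing is mutated.
function sim(world) {
  return Object.freeze({
    tick: world.tick + 1,
    entities: world.entities.map(function (e) {
      // "move" by creating a new entity, not by mutating the old one
      return Object.freeze({ id: e.id, x: e.x + e.vx, vx: e.vx });
    })
  });
}

var w0 = Object.freeze({
  tick: 0,
  entities: [Object.freeze({ id: 1, x: 0, vx: 2 })]
});
var w1 = sim(w0);
var w2 = sim(w1);
// w0 is untouched: old snapshots stay valid for interpolation or replay.
```

The performance caveat above still applies: every tick allocates a whole new world, which is exactly the cost in-place mutation avoids.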

I am not yet a professional game developer, but I did write a multiplayer server in Clojure. It had most of the features of the current version, which was ported to golang:


(Be sure to mouse click on the map, otherwise your arrow key keystrokes may never get read due to focus.)

I got pretty similar performance out of Clojure's persistent collections as I got out of mutable state in a traditional game loop in golang. (Though to be honest, in neither case was that enough performance. 150 to 250 concurrent users, potentially all in the same location, interacting.)

Sorry I should have said view state. Dirty rectangles are indeed such an optimization, you are right.

I'm no AAA game dev but I'm pretty sure games are all about mutable state. I'd love to see some examples of what you're talking about.

> Sorry I should have said view state.

The DOM is not a view. In fact, it stands for Document Object Model :P I know it's confusing since it's where you create the view for your app, but the DOM is actually a model of your view (i.e. the V in your app's MVC is built by manipulating the M in the browser's MVC).

Why can't the web draw at 100 FPS from immutable data? Because the web renders through constraints based on the DOM, and constraints cascade, which is a problem more akin to physics in games (which are mutable for a reason) than their graphics. The bottleneck is updating such a model.

Also, game physics constraints are usually faster to calculate because action at a distance is unusual and there are optimizations like quad-trees. In web pages, inserting a single DOM node can trigger a huge change, making it more like simulating hydrodynamics than the common solids found in video games.

You bring up some interesting points. I guess you could compare the DOM to the OpenGL/DirectX scene, with some differences. I'm not entirely convinced that the graphics rendering is less complex though, considering occlusion, lighting, etc. My main point was really that re-building the entire scene is entirely possible and a lot easier to code than manipulating a stateful scene.

I guess physics is stateful since the programmer only sets initial conditions and the engine moves it forward, so the final state is unknown to the programmer.

The DOM is a fine tool, but I would like a better separation between the model and the view, which re-rendering gives you in a straightforward way. The other way is data binding, but it's considerably more involved and probably not worth it for 99% of the cases IMO.

> I hope that someone invents a sane client-side lib with sane page generation quite soon. Should certainly be possible.

No good libraries that I know of yet but people are already doing it, at least I am: https://groups.google.com/d/msg/clojurescript/T6no_srtBzc/8o...

> I hope that someone invents a sane client-side lib with sane page generation quite soon.

Can you explain more fully what you mean by "sane page generation" please? I am not getting your drift, apart from wondering if you want a return to 1990's static HTML pages...

Well... I'm not expecting to do both, and get 100fps on the client while also doing all the processing serverside.

The author, however, has focused specifically on the oft-cited speed advantages. They use it themselves, and apparently like it.

I guess the point I'm getting at is that the virtual DOM's speed is really just the answer to the question "But isn't re-rendering the whole world over and over really slow?" Which is the first thing anyone deeply familiar with web UIs would ask.

It's the solution to a problem that arises when you move to this programming model. It's the programming model that's the real advantage. The virtual DOM's speed is worth mentioning because it remedies what would otherwise make React completely impractical in the real world.

Virtual DOM makes rendering fast, but it has extra costs in eventing and in JavaScript timers. Virtual DOM is not free, IMHO.

Some quantitative results - http://blog.nparashuram.com/2015/03/performance-comparison-o...

Ugh, React people... Please stop saying "Reason about". Did you ever use that phrase even once in your life before you started using React? Just say that the code is more understandable or whatever.

I agree completely - I've been recently using React to build a static website, and it's a joy to use over standard HTML, or something rendered server side. Being able to split apart the different parts of your page into small components makes it easy to reason about without needing to take an entire page in.

The fact that you can then transfer that to a dynamic website with reasonably good performance is huge IMO.

> DOM diffing isn't there to make React faster than everything else ever imagined.

This is not how React has been sold.

I've seen several React talks given by Facebook people and the pitch they make is that React makes things simpler to reason about with acceptable out of the box performance (and levers to make it faster as needed). There may be some people pitching the speed angle, but I don't think it's the Facebook people.

I disagree. The main message I got was never "React faster than everything" it was rather "React makes complete DOM re-renders fast".

I've been following the rise of React fairly closely and the speed thing has never felt oversold to me. It was more: "Faster than angular on complex pages unless you jump through lots of hoops" and that still seems fairly reasonable.

This is the ONLY way I hear about it being sold.

I completely agree. Trying out react and learning how to structure things with it. It all just felt natural.

> pleasant to work with

if you're a startup or small shop.

if you have to deal with any native code that expects to know something about the DOM, you've now forced that code into expensive, constant polling because you completely pulled the DOM out from under its feet.

> small shop

Like Facebook? Netflix? The BBC?

netflix = have their own renderer. that is a lot of hours of good engineers dedicated to that.

facebook = probably hack away all the low-level stuff they need to tweak.

bbc = they probably have the same problem we have here. that does not stop them or us from boasting that we use react all over the place. doesn't mean we like it.

He forgot to add Airbnb to that list.

React isn't polling; it's callbacks and recursive functions.

i'm not a web dev, so i could be missing something obvious, but i can't think of any use cases for inspecting the dom from native code. example?

The comment makes some sense if you replace "native" with "plain" or "vanilla" JavaScript.

yep. like 99.9% of the code ad networks expect. they want to check DOM state to make sure you are not a sleazy publisher hiding the ads under the content, or being shown in an iframe on some porn site.

with react, they will get inconclusive results because their code might run while things are virtualized or, god forbid, during the render. and you will only get garbage ads because now they consider you a garbage publisher.

Write your wrapper for ads as a component that renders a div, and bind your ad logic in the componentDidMount() lifecycle method. The results should be consistent.

oh! yeah, that does make more sense; i was thinking of something like a qt webview.

I wouldn't say React isn't fast, but the OP is right, the claim that React is faster than the alternatives is basically just hype (or good marketing).

Going back years to React's inception and public launch, everyone's concern about virtual DOM diffing was that it would be impractically slower than fine-grained dependency tracking. Note that React's own home page expects people may be concerned about performance, and rightly does not respond with sweeping statements promising better-than-Brand-X performance:

> One of the first questions people ask when considering React for a project is whether their application will be as fast and responsive as an equivalent non-React version. The idea of re-rendering an entire subtree of components in response to every state change makes people wonder whether this process negatively impacts performance. React uses several clever techniques to minimize the number of costly DOM operations required to update the UI.

The React team and early proponents got out ahead of this concern, pushing the message that React is fast. It's faster than you'd think, they said, and in some cases faster than the alternatives. Their arguments seem to have gotten distorted and simplified by those repeating them.

The original arguments for why React was fast were:

* The DOM is very slow compared to pure JS. To the extent React's diffing is saving you from re-rendering DOM, you are winning. (Compare it to re-rendering a Backbone template, for example.)

* While it's true that other libraries have schemes that track fine-grained data dependencies, allowing them to go straight to the nodes that need re-rendering without doing any tree diffing, these schemes have their own overhead, which could in theory be just as high as React's.

When I pressed a couple React devs on the second point, it was clear they couldn't argue that tracking data dependencies would be worse than diffing necessarily or in general. Rather, they had concluded the two approaches were comparable in practice.

In summary, the DOM is pretty slow, and virtual DOM diffing is pretty fast. It means you don't have to track data changes at a fine-grained level, and you never over-render the DOM. Performance is comparable to other leading frameworks.

I don't use React for its speed; I'm sure I can make anything using vanilla or Ember or whatever just as fast. It's the way it makes you think about things: the developers on my team are now much more productive. They can jump onto something another developer has been working on and see what each component is doing. It's the declarative nature of React and the way it makes developers think about how they compose their components that makes it a great framework. And let's not forget the reason behind a framework: developer efficiency. No framework is a panacea for everything either, which is probably why the framework arguments break out every single day in the JS world... people think the framework they chose is the one true solution, when in fact there are many ways to build a building.

I was swept away by the React hype a bit. I tried to sell my team on React by stating many of the same points and eventually we decided to stick with our current JS framework: ExtJS. And I'm thankful I was overruled a bit.

For our team and the types of applications we're building, ExtJS simply makes much more sense. I've used React now for a simple web app and also a Chrome extension. For certain UI scenarios like Facebook's ads example, the React method of rerendering everything definitely makes things easier. But in most applications, I think such a complex UI is usually rare, and ~80% of the UI screens are usually fairly simple. For the majority of UI screens, React or perhaps React+Flux complicates things by adding unnecessary boilerplate, even when using third-party Flux implementations.

We're in the process of migrating from ExtJS to React, and it is, to put it simply, awesome. Ext was great when we started because it gave us all these great out-of-the-box components to just mix and match and shove things together and get things out the door.

But relatively quickly (i.e. as soon as we wanted to make something look like it wasn't Ext, and you can always easily tell an Ext app) we ran into Ext's inflexibility. A huge percentage of our code now is finagling Ext over an entire file of code to do what would be a line and some css in any other framework.

I have had the exact opposite experience with "boilerplate", as I see Ext needing much more of it than React.

As a caveat: we are building extremely complex enterprise-level software, but even for the basic stuff, if you want to do what Ext wants you to do, you are golden. If you want anything a pixel different? Good luck.

Is your main complaint over theming ExtJS? We're also building enterprisey software that's for the most part not consumer facing. Our clients are much more concerned about functionality. So while we have several custom themes for our products, we haven't drastically altered the base themes.

Regarding boilerplate, I'll give you a simple example. In ExtJS 4, it's one line of code to wire up an event handler in your controller to a view component like a "Save" button. Then in your onSaveButtonClicked event handler in your controller, you typically write something like:

    myView.setLoading(true);  // mask the view
    myModel.save({            // execute the Ajax request
        success: function () {
            myView.setLoading(false);
        }
    });
In React+Flux, clicking the Save button calls an action creator. The action creator first fires an event "loading: true" before it does anything. A store which is bound to that load action then calls a method which dispatches another event. The view which is listening on the store is notified that "something changed" and redraws itself (to show the loading mask/spinner). All this and we haven't even begun loading any data yet. Repeat all steps once the data is loaded or if an error occurs.
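For the curious, the round trip described above can be sketched without any Flux library at all; every name here is illustrative, not from alt or Facebook's implementation, but it makes the number of moving parts visible.

```javascript
// Minimal toy Flux round trip: action creator -> dispatch -> store
// update -> view re-render on every store change.
function createStore() {
  var state = { loading: false, data: null };
  var listeners = [];
  return {
    getState: function () { return state; },
    subscribe: function (fn) { listeners.push(fn); },
    dispatch: function (action) {
      if (action.type === 'SAVE_START') state = { loading: true, data: state.data };
      if (action.type === 'SAVE_SUCCESS') state = { loading: false, data: action.data };
      listeners.forEach(function (fn) { fn(state); });
    }
  };
}

var store = createStore();
var renders = [];
store.subscribe(function (state) {
  // stand-in for the view's render(): record what it would show
  renders.push(state.loading ? 'spinner' : 'data: ' + state.data);
});

// Action creator: fire "loading" first, then the result when it arrives.
function saveSomething(data) {
  store.dispatch({ type: 'SAVE_START' });
  store.dispatch({ type: 'SAVE_SUCCESS', data: data }); // pretend the Ajax call resolved
}

saveSomething('record #1');
// renders now holds ['spinner', 'data: record #1']: two full
// render passes before and after the save, exactly as described.
```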

I'm not against React at all - just for our purposes ExtJS seems like a better fit. To be fair, I did spend some time building a custom URL router which really simplifies everything. All of our controllers are consistent, with start() and stop() methods; they can define data dependencies (i.e. this data needs to be loaded before start is called), etc.

It's not the over-theming (although that is part of what creates that easily-recognizable Ext-ness), it's the strictness with which they expect you to use their components. If you want a dropdown that functions exactly the way Ext made a dropdown it is the easiest thing in the world, but if you want a slightly different behavior then it's a whole rigamarole of events and overrides and stuff. It's not just how it looks but actual functionality that is hard to improve. You'd be frightened if you saw our "ExtOverride.js" file. :)

I should have also caveat-ed that we are on ExtJS 3, so ymmv with 4.

The way you've concisely written the Ext code and long-form written out the React code does show some form of bias, as what you're really doing with React is pretty much the same as how you've written the Ext code, but if you write out the full path of any kind of UI update it will seem more complex. I mean, adding "the store is notified that something changed and redraws itself to show the loading mask/spinner" is what, 2 lines of code? But a long-form explanation makes it seem like a bigger deal.

Your Ext code is missing all the logic to actually set up the Ajax stuff, all the event handling that you call out in React, error handling, etc... If you came into a React system with all the same things set up that you're assuming in your Ext system (data bindings, event handling, visual components) then the code to accomplish the same thing looks almost identical.

As I said, Ext is great if you want to do what Ext wants you to do. It magics away a lot of stuff that you have to call out explicitly with React+Flux. But the second something goes wrong or you want to try something else that magic bites you in the ass.

All I can say is that my experience switching from Ext to React has been going from massive amounts of time wasted figuring out the quirks of event flows and component layout hierarchies, to front-end code that just makes sense, does what you'd expect, and is ridiculously simple to debug.

To be fair, I think that ExtJS 4 offered a number of improvements over 3 - but also introduced a lot of backwards incompatibility...

@mejari - the reason why I wrote out the React code long-form is because I couldn't figure out how to write it concisely :) Not because I'm biased hehe.

I disagree; it's not identical to React+Flux at all. There are more pieces and wiring required for the typical "Save" button example. In React+Flux, the views need to listen on stores:

The views manually call action creators:

Action creators dispatch separate events for beforeRequest, onRequest, and onError:


    // do the save
    this.dispatch({ type: 'SUCCESS', data: myData });
Stores need to listen for action creator events:

    this.bindAction(actions.onBeforeSaveSomething, this.onBeforeSaveSomething);
    this.bindAction(actions.onSave, this.onSave);
    this.bindAction(actions.onError, this.onError);
In the Flux flavor I'm using (alt), stores automatically dispatch events when their state is changed, but I found managing the store state annoying because of the loading and error flags[1].

So the store.onSave might look like this:

    this.loading = false;
    this.error = null;
    this.data = data;
Finally, the view updates its state in response to a store change, which automatically calls render.

Then in the view render, you can show your loading/saving mask.

[1] https://news.ycombinator.com/item?id=9315503

I guess it just comes down to comparing two different things. Yes, you have to do more setup with React, because React isn't what Ext is. But if you set up your React to the point that Ext is at, with the data binding and error handling and event listening that Ext magics away, you get code that is very similar. Almost all of your code is doing what Ext magically does, but in a React application of any size these things are handled via components and mixins and such, and you don't have to deal with them in the clunky way you're describing.

Not to keep harping on this issue because I know we're way off topic, but I'm genuinely curious how the React code can be simplified because this was one of my main pain points using React. Reflux and alt were major improvements over the Facebook flux impl, but they still require the boilerplate I posted above. If you remove that then you have "Flux magic" :)

There's no Ext magic in the code I posted. Flux and MVC are different patterns. In MVC, the controller typically has direct access to the view and model which is why the code is simple:

That would look the same in Java Swing, for example. Flux is a fundamentally different pattern, and in the products we're building I haven't seen a need that would justify its disadvantages. But for some applications, it's probably the right solution.

And the reason you couldn't set up your stores in a similar fashion?

Where the handling is in your action handler for whatever event was raised... This assumes you do your backend data access in the store itself... there are other options.

Sorry for the confusion: setLoading (poorly named) is a method on a view component in ExtJS 4 which displays a modal message mask.

No confusion.. you can have a property on your store that does the same to state, and triggers an event to draw a mask/modal in a similar way... there's nothing preventing you from doing that... it isn't so much in the box, but you can do it pretty easily.

You also aren't stuck building class-based object constructors in JS, as ExtJS projects tend to do, or trying to shim out areas of ExtJS in order to extend a base rendering.

I also had issue with the amount of setup with React, which is part of what drove me to Mithril, which has almost zero boilerplate. Nearly every line of code you write is either relevant application logic or declarative view code.

I've also been thinking about migrating away from Ext--on the one hand there are theming issues--on the plus side, I rather like their grid widget. But, their licensing model just seems to keep changing and that worries me enough to want to move away. For example, now it's just not possible to buy a single developer seat.

I would love there to be an ExtJS alternative - on ReactJS, EmberJS or something else - with all those widgets. Heck, a jQuery based library of widgets that are as nice as ExtJS would be beuuuuuuutiful.

This is a great point; ReactJS has many benefits. People should choose it based on those merits, not on "speed".

If you really buy into React for the benefit of easier-to-reason-about code, then you should be using ClojureScript on top of it.

Pet peeve: Can we please stop blindly abusing `track by $index` without understanding it?

The thing with `track by $index` that nobody talks about is that, like many of the workarounds in Angular, it's a footgun in disguise. Consider this: http://plnkr.co/edit/qKm7fYZFCkXHI5pkPMYL?p=preview and focus on the first input. Notice that the focus stays on the first input, instead of sticking with the value as it jumps around, so if you start typing, you'll accidentally modify another item as well. Oops.

If you don't use `track by` (or, if you use keys as they recommend you do if this was React/Mithril/some other vdom library), you'll see that the focus stays in the input that corresponds to the value you originally clicked on, which is the correct behavior.

While this may seem like a contrived example, the underlying problem is that it silently messes up state synchronization between your data and the DOM. This can become a nightmare because anything ranging from jquery plugins and directives to mundane things like inputs (as is shown in the plunkr) or links with onmouseup handlers or filters become potential minefields where you are eventually forced to choose between decent performance and correctness (after the hours it took you to finally understand the problem).
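A toy reconciler (hypothetical, not Angular's or React's actual diffing code) shows the identity problem directly: DOM-only state like focus rides along with the reused node, so index-based tracking pins it to a position while key-based tracking moves it with the item.

```javascript
// Each "node" carries some DOM-only state (here, `focused`). On a
// reorder, key-based matching moves that state with the item;
// index-based matching leaves it stuck at the old position.
function reconcile(oldNodes, newItems, keyOf) {
  var byKey = {};
  oldNodes.forEach(function (n) { byKey[keyOf(n.item, n.index)] = n; });
  return newItems.map(function (item, index) {
    var reused = byKey[keyOf(item, index)];
    return {
      item: item,
      index: index,
      focused: reused ? reused.focused : false // DOM state rides with the node
    };
  });
}

var nodes = [
  { item: 'a', index: 0, focused: true },  // the user is typing in "a"
  { item: 'b', index: 1, focused: false }
];
var reordered = ['b', 'a'];

// track by $index: focus stays at position 0, now pointing at "b"
var byIndex = reconcile(nodes, reordered, function (item, i) { return i; });
// track by a real key: focus follows "a" to its new position
var byId = reconcile(nodes, reordered, function (item) { return item; });
```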

This is absolutely a contrived example. $index is the equivalent of `var ctrl = angular.controller`: something that serves very well to communicate basic concepts but that you would never use in a real-world application. If you're dealing with server-side data, your model should have a real unique key/id, and that is what you track by.

The amount of stuff in AngularJS that "serves very well to communicate basic concepts but that you would never use in a real-world application" is a huge problem with the framework. It's a perfect storm of a documentation site and blogosphere full of misleading and/or contradictory advice and examples, combined with conceptual and implementation complexity that makes it incredibly hard to just read the source and deduce best practices from first principles.

This article contains a good example of the phenomenon: it suggests switching from `$timeout` to `setTimeout` with `$digest` to "give both frameworks the same information". Is that a silver bullet solution, or does it come with trade-offs, and if it does, what are they? I'm not sure, but my gut says that I shouldn't change this pattern in my code everywhere without doing a bunch of research to understand exactly what is going on. That's fine, researching how our tools work is a big part of the job, but I feel like Angular has a disproportionate amount of this sort of complexity compared with other tools I've used.

> The amount of stuff in AngularJS that "serves very well to communicate basic concepts but that you would never use in a real-world application" is a huge problem with the framework.

Come to think of it, this describes a lot of PHP example code out there too.

The basic premise seems to be that AngularJS can be just as performant as ReactJS if you do your homework and avoid common pitfalls of AngularJS.

I would argue that the beauty of ReactJS is that it doesn't have any gotchas. It's performant without needing a deep knowledge of the framework.

> The basic premise seems to be that AngularJS can be just as performant as ReactJS if you do your homework and avoid common pitfalls of AngularJS.

While AngularJS (1.x) is a bit faster now, your comment is a bit like saying Ruby can be as fast as Java if one does one's homework.

AngularJS has architectural problems that can only be reduced if one doesn't use many of Angular's features (scopes, watches) inside directives (which means writing components in pure JS). So it takes a huge effort to make Angular fast in general. That's the reason they are creating an entirely new framework with version 2.x (which I believe is a mistake; Angular 1.x, despite its flaws, was pragmatic, and 2.x isn't).

React is faster for a few reasons: there are no templates in React (everything is JS code); React does all the heavy computation outside the DOM, so manipulations are minimal; and there is no two-way data binding by default, so the data flow is unidirectional.

Obviously the Angular team wants the same things, since this has proven to be a better architecture.

React gets faster if you use it's features. Angular gets faster if you don't.

Given the above discussion about the large performance differences mostly disappearing once Angular's `track by` is used, I think we can rule out templates as a major driver of performance grief. Templates are usually compiled into plain JavaScript code once (or at least, can and should be), so I really don't see how there should be grand differences between what React is doing and how template processors work.
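To make the `track by` (and React keys) idea concrete, here's a sketch of keyed reconciliation. The `reconcileKeyed` helper is hypothetical — it is not Angular's or React's actual code — but it shows the mechanism: matching old and new list items by a stable id lets the renderer keep existing DOM nodes and only create nodes for genuinely new items.

```javascript
// Hypothetical sketch of keyed list reconciliation (not Angular/React source):
// items are matched by a stable key, so only new keys force node creation.
function reconcileKeyed(oldItems, newItems, getKey) {
  const oldByKey = new Map(oldItems.map(item => [getKey(item), item]));
  let created = 0;
  let reused = 0;
  const keys = newItems.map(item => {
    const key = getKey(item);
    if (oldByKey.has(key)) {
      reused += 1;   // existing DOM node would be kept and patched in place
    } else {
      created += 1;  // only this item needs a brand-new node
    }
    return key;
  });
  return { keys, created, reused };
}

// Replacing one row out of three: an unkeyed renderer would rebuild all
// three nodes; with keys, two are reused and only one is created.
const before = [{ id: 1 }, { id: 2 }, { id: 3 }];
const after = [{ id: 1 }, { id: 2 }, { id: 4 }];
const stats = reconcileKeyed(before, after, item => item.id);
// stats.reused === 2, stats.created === 1
```

Without a key (or without `track by`), the renderer has no way to know that the first two rows are "the same" rows, which is exactly the failure mode the article's demos hit.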

This, by the way, is an EXCELLENT example of the kind of argument that has come out of the React.js camp that gives me a lot of pause. There are radical differences in how frontend applications are developed with React, and the primary selling point I kept hearing was "embrace it, the performance difference is HUGE!". Then, as arguments came out about how the performance differences have caveats, the argument switched to "...well, this is how we should be developing anyway, for X, Y, and Z architectural reasons". I'm not saying that X, Y, and Z aren't valid discussion points, but it's been wrapped up in so much pseudo-technical FUD, and that's pretty unfortunate.

Signed, Recovering Template-Aholic

> Templates are usually compiled into native javascript code once

no they aren't.

> (or at least, can and should be)

not by default. React doesn't have templates at all, so case closed.

Okay, but React has a bunch of performance and design pitfalls that one could fall into just as easily as not compiling templates. In fact, probably more easily, because compiling templates is a best practice that's built into scaffolding libraries and discussed all over the place.

> no they aren't.

says who?

Says the Angular team, which wants to copy how React works and ditch the old Angular 1.x templates. Why do you think they are creating an entirely new framework? Because the old version was good enough? No. 2.x means the whole of 1.x was an architectural mistake.

> AngularJS has architectural problems that can only be reduced if one doesn't use much of angular features(scopes,watches) inside directives(which means writing components in pure js).

That's an interesting piece of advice there. Could you go more into this?

We've found this out at one of my contracts. We've moved off of the typical $watch/$apply model for our directives and replaced it with a simple observable solution like Scheming (https://github.com/autoric/scheming). Our components are always watching attributes on our Scheming models, which are outside the digest cycle. It speeds things up, and is much nicer to work with IMO.

React just passes the gotchas on to you in the architecture: if you make a mistake there, it increases the likelihood of pain down the road. It also requires more complex build tooling.

That is not to say that it is a good or bad thing - each person's/company's needs are different. I use both React and Angular - I am a lot more performant developing with Angular due to exposure to it the past 2 1/2 years, but I like both libraries.

Hopefully that means the gotchas will be more visible to everyone, but obviously that's dependent on the direction of the architecture.

ReactJS is only a UI library, while AngularJS is a giant beast that does practically everything you need. They are not comparable; once you account for the complexity of things like DI, one of them is of course easier.

> I would argue that the beauty of ReactJS is that it doesn't have any gotchas. It's performant without needing a deep knowledge of the framework.

shouldComponentUpdate is just as "deep" as this Angular optimization.

Just use PureRenderMixin for all components and you never have to implement shouldComponentUpdate.

shouldComponentUpdate is generalizable across any type of component. The Angular optimization looked array-specific.
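The PureRenderMixin behaviour mentioned above amounts to a shallow comparison of props and state. Here's a hedged sketch — the standalone `shallowEqual` and `shouldComponentUpdate` functions below are illustrative, not React's actual source — showing why the same check generalizes to any component, not just lists:

```javascript
// Sketch of what a pure-render check does (not React's implementation):
// skip re-rendering when a shallow comparison finds no changed values.
function shallowEqual(a, b) {
  if (a === b) return true;
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  return aKeys.every(key => a[key] === b[key]);
}

// Re-render only if props or state changed at the top level.
function shouldComponentUpdate(current, next) {
  return !shallowEqual(current.props, next.props) ||
         !shallowEqual(current.state, next.state);
}

const current = { props: { user: 'ada' }, state: { open: false } };
const sameData = { props: { user: 'ada' }, state: { open: false } };
const newProps = { props: { user: 'grace' }, state: { open: false } };
// shouldComponentUpdate(current, sameData) === false  (skip render)
// shouldComponentUpdate(current, newProps) === true   (re-render)
```

Note the shallow comparison only works reliably when data is replaced rather than mutated in place, which is why this pattern pairs so well with immutable data.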

The most important thing about React's update performance is that it's roughly O(n) in the size of the virtual DOM. If React's performance is a problem, you can window the input data, do things more efficiently in your render, or tweak shouldComponentUpdate. The goal isn't to be as fast as possible but rather to be fast enough, and for non-mobile you can hit that target with only the occasional tweak. For mobile, you can get as exciting as you like[1], but I've never had to go beyond the above steps.

[1] http://engineering.flipboard.com/2015/02/mobile-web/

You can build things that are faster than React in benchmarks. Pretty much every vdom-based library/framework/language is faster than React in benchmarks, and absolute performance has never been a stated goal of the project. The framework does help you out in ways that don't show up in benchmarks like this. An example would be batching DOM updates, which is much harder when you have components mucking around directly with their own internal set of nodes.
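The batching idea can be sketched in a few lines. The `createBatcher` helper below is a toy, not React's scheduler (a real framework would flush on requestAnimationFrame rather than on an explicit call), but it shows why centralizing DOM writes pays off: repeated updates to the same target coalesce into a single write.

```javascript
// Toy update batcher (illustrative only): components enqueue changes, and
// the expensive real-DOM write happens once per flush, not once per update.
function createBatcher(applyToDom) {
  const pending = new Map();       // latest value wins per target
  return {
    enqueue(target, value) {
      pending.set(target, value);  // coalesce repeated writes to one target
    },
    flush() {
      // A real framework would schedule this on requestAnimationFrame.
      for (const [target, value] of pending) applyToDom(target, value);
      const writeCount = pending.size;
      pending.clear();
      return writeCount;           // number of actual DOM writes performed
    }
  };
}

const writes = [];
const batcher = createBatcher((target, value) => writes.push([target, value]));
batcher.enqueue('#count', '1');
batcher.enqueue('#count', '2');   // overwrites the earlier pending write
batcher.enqueue('#label', 'done');
// Three enqueues collapse into two DOM writes on flush().
```

When components write to their own nodes directly, there is no single place to do this coalescing — which is the point being made above.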

I also favor using an immutable data tree with a corresponding shouldComponentUpdate and only rendering on rAF but those force architectural decisions on the project.

We need to stop this cargo cult stuff.

"Speed" as determined by rendering stuff to a page is actually something we can determine. Is it not possible to simply trace the framework execution? What's the point of having silly hype pieces back and forth when we're debating trivial examples that ought to be not too difficult to measure?

Finally, wrt React: it's just a view layer. Comparing it to Ember or Angular as if it were a fully fledged, swappable alternative doesn't really make sense.

> Finally, wrt React: it's just a view layer. Comparing it to Ember or Angular as if it were a fully fledged, swappable alternative doesn't really make sense.

This, every time. If there are any bloggers among you: as soon as you start doing a 1:1 comparison between Angular and React, stop.

I (apparently incorrectly) thought they were similar enough to compare. What are the significant additional features that Ember and Angular provide?

Not sure about Ember, but Angular provides

- a full MV* implementation
- routing
- DI
- decorators
- CSP / XSS protection
- a ton of services for mocking in unit tests ($http, $window, et al.)

I'm sure I'm missing a fair amount of stuff here, too. This is just what I get out of Angular 1.x on a daily basis.

Ember provides everything Angular does plus:

* A more mature router (IMHO, the best router out of all the JS frameworks)
* A CLI for developers to generate their models, views, components, controllers, and routes
* An opinion on where things should live and how they should be structured (ups the learning curve but saves time in the seemingly endless developer debates about architecture)

> Or the strange lack of any demonstrable examples of the performance improvements achieved by this feature... except the comparison demos.

React is winning because of real-world experience. Blog posts are somewhat meaningless, arguing specific nuances back and forth. Who knows what's actually right. But when you actually sit down and learn React, and use it in a complex app, you understand how easily it lets you fine-tune performance, and instead of getting in the way it helps you along the path to blazingly fast UIs.

Nothing is magical out of the box; the key is to help the user along the way, and that's exactly what React excels at. Performance is a definite factor in choosing React.

Software doesn't achieve the level of fame of React (or Ember, or Angular for that matter) solely by hype. It may have a brief period of fame, but several years long of building a passionate community means there probably is something there.

> Blog posts are somewhat meaningless, arguing specific nuances back and forth.

I disagree. Your "Bloop" blog post about React with its game loop analogy totally opened my eyes. That's the first time I really "got it". Moved my org to React for all new development and haven't looked back. So thank you for your "somewhat meaningless" blog post!

Thanks :) Times like that I go well out of my way to try to make the post meaningful. I'm proud of that post because it focuses on new ideas and applying them in realistic ways.

A lot of blog posts tend to be taking a few random facts out of context and making some disingenuous conclusion. I'm not saying the original post here is like that exactly, but I don't think you can really get much from small posts like it.

Going to the GPs blog doesn't seem to list a matching blog post. Got a link for me?

MongoDB?

There are some situations where Angular 1.x's digest/compile cycle leads to degenerate performance and where clean solutions are all but unavailable.

Building arbitrarily recursive structures in Angular leads to degenerate performance[1]. In this example, I had to introduce artificial timeouts during rendering so that the browser's UI thread is not completely locked out for several seconds.

The crux of the problem is that to make a recursive directive, Angular needs to swap back and forth between `$compile`ing newly-added markup and `$digest`ing the `$scope` that is assigned to the new markup as a part of the linking phase. It is my understanding that this issue is not manifested in React.

1: http://embed.plnkr.co/1gOJjJ/

Actually, I believe I am facing a similar issue. I have directives that encapsulate other directives, and compiling 1000 of them is slow, but they run pretty well after they are compiled. Any thoughts on speeding up that process?

From a ClojureScript viewpoint, the whole "really fast" thing is a red herring anyway. I don't care whether it's faster than Angular at rendering some tables. What I do care about is the model: my views can now be functions of my immutable data structures and I can write components that react to changes.

There are two performance-related issues I care about: React doesn't modify the DOM if it doesn't need to, and my components do not even re-render most of the time, because they don't need to if the data hasn't changed (and thanks to immutable data structures the comparisons are really cheap).
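The "really cheap comparisons" point deserves a concrete illustration. With immutable updates, an unchanged subtree keeps its object identity, so "has my data changed?" becomes a single reference comparison instead of a deep walk. The `setIn` helper below is a hypothetical sketch using plain objects (real ClojureScript data structures share structure far more aggressively):

```javascript
// Illustrative sketch of structural sharing with plain objects: an update
// copies only the changed path; untouched subtrees keep their identity.
function setIn(state, key, value) {
  return Object.assign({}, state, { [key]: value });
}

const state = {
  sidebar: { items: ['a', 'b'] },
  editor: { text: 'hello' }
};
const next = setIn(state, 'editor', { text: 'hello world' });

// A sidebar component can decide whether to re-render with one cheap check:
const sidebarChanged = next.sidebar !== state.sidebar; // false: same object
const editorChanged = next.editor !== state.editor;    // true: replaced
```

This is exactly why most components "do not even re-render most of the time": the equality check that gates rendering costs one pointer comparison per subtree.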

It is this approach that is revolutionary. "Really fast" has nothing to do with it. Sure, I'm glad that I can easily create complex apps that have exactly zero performance issues right from the start, but I'm not going to compete in any benchmarks anyway.

I have been using Backbone + Mustache/Handlebars templates, and I am not clear on why one would use a virtual DOM. My application has several views, and in my views I use events to sync data model changes with the view, and the view's render function maintains the DOM element. None of my views have to deal with the whole DOM. Therefore, I am really confused. So, with my apologies for asking a dumb question: why maintain a whole virtual DOM? And what am I missing with the Backbone + Mustache approach?

Say you want a form with as-you-type validation. You can't have a view containing the whole form and just re-render that as someone types. If you did, you'd lose the user's current focus on every render. This forces you to manipulate parts of the DOM "by hand", something React abstracts for you.

I find Backbone events extremely hard to work with (error-prone, super hard to test / debug), because there are many operations that emit tons of events. So you either listen+render() on any event (very inconsistent performance if your app has any kind of complexity), or you hand-pick the events you want to react to (event-hell). Using immutable objects along with React's PureRenderMixin means you can just render() on any change, with great performance. It makes your UI purely functional, very easy to reason about, and easy to test.

> You can't have a view containing the whole form and just re-render that as someone types. If you did, you'd lose the user's current focus on every render. This forces you to manipulate parts of the DOM "by hand", something React abstracts for you.

This is not how most modern view engines work. Somehow the React community has convinced the entire JS world that they invented the idea of "only render what has changed" when that's just not the case.

We're talking about Backbone + templating here, and that is what happens with those libs.

Obviously there are other libs / frameworks that can do that, all of them using some sort of virtual DOM.

You've got that backwards. Every other framework "only renders what has changed". React re-renders everything.

Reread that in the context of the parent's statement. There is a misconception that every other template library blows away parts of the dom on each update and that only React will do something like input.value=newValue, but this is not the case.

I sure don't see where that misconception is the fault of "the React community."

In fact, this comment is the first time I've seen that formulated. I'll be honest, I don't read every React-related forum entry on the Internet, so I may have missed someone somewhere spreading such a misconception, but there certainly is no such centrally communicated premise.

React only "renders" everything within its virtual DOM. It then applies the result of diffing the previous and current versions, trying to use the most atomic operations (innerText, add/removeClass, add/removeNode, etc.).

Thanks. I get it now!

At Floobits, we started out using Backbone and handlebars because that is what we knew. About a year ago, we did a rewrite into React + Flux (our own implementation, because it was a year ago). The rewrite reduced KLOCs by something like 40%. Moreover, it radically simplified the code base. The big wins include one way data binding, updating views reactively, and removing boilerplate.

I did not realize it at the time, but two-way data binding is evil. We spent a disproportionately large amount of time tracing (hard) bugs related to dispatching events — bugs related to a child updating its parent, which may or may not update the child again. Data should flow in one and only one direction. It may be possible to do this in Backbone, but it's not encouraged when listening to models.

With any view system for the front end, you are necessarily responsible for creating the initial state of the DOM. You are also typically responsible for updating the DOM with your application state. If you ever run into a performance bottleneck, you will be forced to step outside your template system: you will either have to decompose your templates into needlessly small atomic chunks or resort to ad hoc DOM munging. Either solution is awful. React more or less lets you specify how to turn data into DOM in exactly one place for all time. In other words, if you care about performance, you will end up poorly reimplementing one of the best features of React.

And finally, Backbone has to be the most verbose JS framework I've ever used.

VirtualDOM shines when your document tree has a dynamic structure. If you never add or remove DOM nodes and just alter text contents or CSS classes, then it's easy to keep references to the internal nodes that need to be updated and to write "onchange" handlers to keep things up to date. However, if your document is more dynamic, you can't keep references to the internal nodes anymore, so it gets harder to write observables; with a virtual DOM it stays just as easy as the static case.
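To see what that buys you, here's a toy diff over such a dynamic tree. This is far simpler than React's real reconciliation algorithm (no keys, no heuristics), and the node shape `{tag, text, children}` is invented for the example, but it shows the core idea: compare two trees and emit a minimal patch list, so structural changes don't require hand-written observers per node.

```javascript
// Toy virtual-DOM diff (illustrative, not React's algorithm): walk two
// trees of {tag, text, children} nodes and emit the patches needed to
// turn the old tree into the new one.
function diff(oldNode, newNode, path = []) {
  if (!oldNode) return [{ op: 'create', path, node: newNode }];
  if (!newNode) return [{ op: 'remove', path }];
  if (oldNode.tag !== newNode.tag) {
    return [{ op: 'replace', path, node: newNode }];
  }
  const patches = [];
  if (oldNode.text !== newNode.text) {
    patches.push({ op: 'setText', path, text: newNode.text });
  }
  const oldKids = oldNode.children || [];
  const newKids = newNode.children || [];
  const len = Math.max(oldKids.length, newKids.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldKids[i], newKids[i], path.concat(i)));
  }
  return patches;
}

const before = { tag: 'ul', children: [{ tag: 'li', text: 'one' }] };
const after = {
  tag: 'ul',
  children: [{ tag: 'li', text: 'one' }, { tag: 'li', text: 'two' }]
};
const patchList = diff(before, after);
// patchList holds a single 'create' patch at path [1] for the new <li>;
// the existing <li> node is left untouched.
```

The caller never tracks which nodes exist — it just describes the desired tree each time, and the diff works out the structural changes.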

Thank you. This is the distinction I was looking for

Events firing and listenTo'ing all over the place make it really hard to reason about what will happen to your UI when state changes. The bigger the project (or the more complex the UI), the harder it is to keep track of all the events and their side effects.

The author's "prove me wrong" attitude is awesome, and I hope we see some deeper dives into virtual DOM performance in a follow-up article.

It seems to me they're missing the point. The implementation of ng-repeat is quite complex, and if, for some reason, it doesn't do what you want and you decide to write your own directive you have to deal with mutating the DOM in an efficient manner yourself. React on the other hand allows you to just generate your DOM structure and be fairly confident in it performing well by default.

In the article:

> This little change ["track by"] invalidates 95% of comparisons between ReactJS and AngularJS.

Please, let's stop using arbitrarily specific numbers without data to back them up.

We worked really hard to optimize our Angular code to make it "fast enough" for our Cordova app. On the web it didn't really matter, because everything was <20ms, but on the phone the same operation was >250ms. We tried lots of hacks with Angular, and eventually I tried replacing the slowest view with React. Without any hacks, it was so much faster. It's obviously possible to do the same thing with Angular by not using the dirty checking (or not using $scope at all), but at that point you're not really using Angular anymore.

Point is, for us, React led to much cleaner and faster code. I was also waiting for an excuse to jump to ES6, and I prefer the controlled state/props philosophy to the $scope/directives one. We're not using Flux but have been highly inspired by it and by (Optimizely's) nuclear.js library.

Developers have a tendency to get overly excited about speed. Fast database, fast search, fast framework, etc. The truth of the matter is that speed should be evaluated within a spectrum. As long as your application latency falls within an acceptable range, you should be just fine.

Thank you. For me, React/Flux and the like just felt better than Angular 1.x while being fast enough. The tooling and support for something like Polymer just doesn't fit as well.

React with a Flux-like flow is simply easier to reason about... When I've used Angular, I always hit points of frustration or weirdness that simply didn't make sense... the more advanced bits of React are less surprising to my mind. It's a shift in thinking about larger component-based applications.

It's also worth noting that Angular was started back in 2009, and was likely in development before that... this was before CommonJS or AMD were widespread, when jQuery was a rising star. This is very much reflected in Angular 1.x.

React is a different approach with a slightly more functional mindset (though I'm still not 100% sold on the structure). With a Flux-like workflow, it's very easy to reason about and structure your events and data... yes, events take some extra declaration, but your workflows become a lot easier.

Riot.js loops were fixed today. They are now around 4x faster than React's.

React: http://jsfiddle.net/brianmfranklin/w674Lv7p/
Riot: http://jsfiddle.net/gianlucaguarini/cbjuek58/

just FYI

Your benchmark for React is not correct because when you do a second setState to show the duration it will be scheduled after requestAnimationFrame.

While the article shows some really good performance improvements, the tone is really sensationalist. While challenging 'comparison demos' as a form of proof, the author responds with... yet another comparison demo.

The title isn't entirely wrong, either: ReactJS is fast out of the box. Other frameworks can be tuned to achieve comparable speeds. But in most cases, ReactJS still wins.

When I was using Angular I thought doing something like

  setTimeout(function() { $scope.$digest(); }, 0);
was an ugly hack and a sign of Angular's leaky abstractions showing through. Is this considered good practice now (or was it always)?

Edit: just to clarify, I'm asking because in the OP this was given as one of the ways to fix Angular's speed issues.

That's a sign someone has fundamentally misunderstood Angular and is hacking around their misunderstanding.

Why do you say that? What about it makes it fundamentally misunderstanding?

You should never have to use $scope.$digest() unless you're testing a directive. You should never have to fire a digest loop manually.

It's the same thing as using $timeout; it runs a digest at the end after a setTimeout. I wouldn't call it good practice, but it wouldn't bother me either if I saw it in a codebase. You are also welcome to use things like $evalAsync.

It's not the same thing: as the author points out, using `$timeout` in the example without an isolate scope was the source of the original problem. Instead he's suggesting you use $timeout without its default behaviour (something that's possible now through the false argument, or by calling setTimeout directly).

This is absolutely a leaky abstraction, and whether or not it's necessary sometimes when using Angular, it should still bother you.


As I was mumbling to myself in a recent other HN comment, it has become clear to me that few people take the time to actually understand the speed differences between various technologies. Even as "everybody" says how benchmarks are useless, "everybody" still uses the most trivial microbenchmarks to decide what's fast. (Perhaps the uselessness of benchmarks has more to do with "everybody" taking cognitive shortcuts than the benchmarks themselves....)

Amusingly, this produces the result that almost every technology is faster than every other tech, with the exception of the technologies that are vastly more powerful than something else but still at least as fast (i.e., Python is at least as fast as C at some task, so it must be as fast as C in general, right?).

(The natural reaction to that is to assume that there is no such thing as speed differences, but, alas, that's not true either. No easy answers! There are things that are faster than other things at some tasks. And there are jobs where you really need to know which is which because even today, the difference between a 50-node cluster and 1 machine that does it all is quite monetarily significant....)

The final point of the argument is the best to me. I originally chose React because it appeared different from everything else. Thus far I feel I have been able to be much more productive and also to think about things in my web application differently. The fact that it is fast is only secondary to the fact that I feel I can create a robust and maintainable codebase with more ease than I could with other frameworks.

Seems strange that you have to use "track by" to get a speed up when it's meant to be used for something completely different. Is this really a "fix" to slow ng-repeats? Are there any side effects?

How is something 310% slower? My physics professor would have slapped me for such sloppiness in wording.

I think many of the "advantages" of ReactJS are just hype

- "Immutable data and one-directional data flow are easier to learn and understand, harder to break, etc." This isn't anything new; these are just concepts taken from declarative programming. You could always have used those concepts in your JS. They aren't better or worse than imperative programming. That's like saying Haskell is better than C++.

- "Two-way binding creates infinite loops!" Umm... not if you're a half-decent programmer. I've worked with complex single-page apps for years, even with junior developers, and it's never been a problem.

- "ReactJS is so much faster." Only in unrealistic benchmarks, as this illustrates (Mithril is faster in those benchmarks, BTW).

React also has a few major downsides:

- JSX breaks your IDE's error checking and line numbers in error messages (and not using JSX is a pain/verbose)

- Being only the V in MVC leaves out too much. Now you have to patch together a URL router, an http/socket communication layer, a custom solution for managing the model, etc.

Routing, http, and models aren't hard problems. Blaming React for not being an entire MVC framework is silly. It's not designed to be, and there are any number of ways to solve that problem.

Being able to use it with ANY different set of solutions for MC is especially good. I can use React for UI on top of old jQuery pages if I want to. I'm not forced to change my entire application to use it.

Decoupling is a really, really BIG benefit, not a downside.

.. until you're building a product that requires regular external security audits, and you have to keep up with all of the micro-libraries you're using and any security issues they may or may not have. The Angular team has been very responsive to any security issues that have come up, and you get so much for free by using the framework.

models aren't hard problems?

OK, then show me a js model system that does one-to-one, one-to-many, and many-to-many relationships — while still having clear, concise and understandable code.

> JSX breaks your IDE's error checking and line numbers in error messages (and not using JSX is a pain/verbose)

Source maps. Unless you aren't using any form of minification or bundling at all, you need them even if you aren't using JSX.

The most interesting thing to me is that React's virtual dom implementation, according to http://vdom-benchmark.github.io/vdom-benchmark/, is generally the slowest of the bunch. You could pretty much move to using anything else and have a faster vdom implementation and smaller library. Additionally, many out there are so similar to writing React that I don't see how using React is a win over the alternatives. In my own experience on my machine, the dbmon example in this article for Angular is significantly outperformed by the likes of cito+t7, http://t7js.com/dbmonster/precompiled.html. Angular shows roughly 6fps topping out at 6.7 for me, while t7 is showing roughly 13fps topping out at 13.8. I don't know about anyone else, but 100% more performance isn't trivial.

React might be slower, but it is more battle-tested across browsers than other vdom implementations; any vdom implementation might have to take performance hits to support older browsers like Internet Explorer 8.

Nice, cito.js shows really nice performance.

I just smell a load of prejudice here. Leaving aside the technical details (hold a debate about which is better, Angular or React, and there goes the rest of your day), the IDE/error-checking stuff isn't a problem anymore. Maybe you should try eslint, babel-eslint, babel-sublime, or anything else that has emerged since the turn of the century.

> That's like saying Haskell is better than C++.

But.... it is.

I have to disagree especially with your last point. I just use React to render the UI not to replace the entire backend application with JS. For that, React is perfect.

I don't tend to think ReactJS has many advantages, just differences.

Angular 1.x is optimized around creating pages, whereas React (and Angular 2.x) are optimized around creating small components.

Angular 1.x uses two-way bindings by default and you can opt into one-way bindings. React essentially does the opposite.

I have had infinite loops pop up in Angular, just by having floating-point numbers that don't "settle" down to the same value. Also, it is too easy to end up with too many watchers on a page.

Also, Angular is a much more complete solution than React but I have found react-router and the fetch api get me 80% there.

one-time bindings are not the same thing as one-way bindings.

> - JSX breaks your IDE's error checking and line numbers in error messages (and not using JSX is a pain/verbose)

JSX doesn't break line numbers in error messages, because the JSX-to-JS transform preserves line numbers.

The tl;dr I get from this is that angular docs don't do a good job of actually documenting "fast by default" options.

OK, the authors of two talks missed the Angular DSL that speeds up the rendering of a 2D table. But what about deeply nested tree structures that change? Is there a DSL to speed that up too? If the author wants a demonstration, maybe he could try that.

You could still continue to use the track by syntax in the nested ng-repeats, but I would be careful. O(n^2) is always worrisome.

No kidding. I've been saying that frameworks like Angular, Ember etc. do their dirty-checking in the MODEL, whereas React, Mithril and others do their dirty-checking in the VIEW.

The former sometimes get some misses when the underlying data changes but the bound views stay the same. And how often does that really happen? On the other hand, "two-way data binding" is extra sugar that is, indeed, slow. Even Angular 2 got away from that.

So what is the point of using React or Mithril? Well, there are two. One is if you enjoy having idempotent rendering functions rendering EVERYTHING OFFSCREEN ON EVERY FRAME for making 1-1 correspondence between "state" and "view" explicit. The other is that Facebook actually uses it in their own products, and open-sources React Native, so you can technically build native apps (if you're willing to put up with their embedded JS environment instead of the browser's). And since they are so enthusiastic about it, maybe someday you will be able to re-use a bunch of components Facebook, or others, write for you, in your apps.

Personally, I will prefer http://platform.qbix.com ;-)

Setting aside all arguments in the article, I am just generally annoyed by the simple fact that he calls React a "framework" in the article. It's a library, not a framework.

Angular is a framework; it includes everything that you need (and in most cases, significantly more than what you need). You can build an entire web application using Angular without any other external libraries. On the other hand, you usually need to combine React with a router and some library to manage state (or stores/actions if Flux).

For this reason, we shouldn't even be comparing "Angular v. React". They aren't equal and they were not meant to be compared. If want to make a worthwhile comparison, try a "Angular v. React with Flux Architecture".

...but, honestly, can we just stop writing articles like this? What purpose do articles like this serve other than saying, "X tool that I use is better than Y tool that you use and here is why"? If you really want to share knowledge, write about best practices, anti-patterns, etc. Good riddance.

>People have sent me links to a number of other demos. All are simply missing 'track by'.

Really, people? I knew most of the people spewing the Angular hate didn't know what they were doing, but this is kinda silly.

Virtual DOM diffing (the approach taken by React) is "faster" because, in the most simplistic terms, it gives you the optimization mentioned in the article automatically: the diff works out exactly what changed. What's good about React is that it does some of this for you (via the keys and structure you provide) and also gives you control through the "shouldComponentUpdate" method.

Edit: Down-voters care to comment?

Personally I never heard of the alleged speed advantages. I'm only using React for its architecture, i.e. functionally "pure" UIs.

The jsblocks library authors are being incredibly deceitful with the tests they are showing off. Their entire premise is that they are faster than React, but their tests are just manipulated. They posted this on Product Hunt, http://www.producthunt.com/posts/jsblocks, and at that point it claimed 800ms on initial load, whereas React had 1250ms. I checked out the React code, and there was a hidden input being rendered with every td element in the table. React has very heavy inputs, which is why you are supposed to use state.

original test (with hidden td elements)

js blocks: 800ms

react: 1250ms

After removing hidden element

jsblocks: 350-400ms

react: 200-350ms

So yes, jsblocks can render a lot of input elements (which you shouldn't be using anyway) faster than React. The React team explains why the inputs are fairly heavy when rendering a large number of them: https://github.com/facebook/react/issues/3771. They didn't stop there, though; they responded to my claims and came back with a new test. This time they just doubled the number of data points and added an extra td element, making it an unreasonably absurd number of elements.

revised test (original test of 5000 elements to 18000 elements)

jsblocks: 600ms (they posted 700ms, so I'll give them that)

react: 700ms (they posted 950ms)

After removing half of the data (down to about 9000 elements), react already tied jsblocks:

jsblocks: 450ms

react: 450ms

Going down to about 5000 elements

jsblocks: 250ms

react: 200ms

Whether or not there is any merit to this library, you can't just audaciously claim that you have the fastest library, like this blog post and their marketing do. It's terrible for the community. There is an absurd number of JavaScript libraries, and getting devs to push a few key frameworks forward is already difficult enough. If you are going to add another library to the list, at least be able to back up your claims.

I will admit that jsblocks seems to be better at rendering extremely large amounts of tabular data, but that is never an issue for me, and you should be using other techniques to mitigate it anyway. Once you get to realistic scenarios, React is still faster and comes with tons of other advantages.

I didn't test the Angular one because I don't know Angular well enough to spot odd code, but without even looking, the fact that Angular was the only one without a minified library seems fishy. It's at 900+kb compared to jsblocks/react at around 125kb. To me it seems that, since the status quo is "React is faster and Angular is slower," they constructed these tests to come out 3. angular, 2. react, 1. jsblocks (the fastest library ever!!).

So there's my rant. Please stop promoting your library like you are, it's just distasteful.

edit: https://drive.google.com/folderview?id=0BxTyg4RuMOHUfjlvTkN3... Here's a link of the original test without the hidden elements, the response test that required a ton of elements, and the email they sent still talking about their amazing speed!

I'm surprised this is the first mention of jsblocks, when the graphic in the article shows jsblocks beating everybody, including React.

IMO Angular1 and React are so different in their approaches that it does not even make sense to compare them. If you want to base your application on a 'kind of' functional design with sane reasoning you should know what to use.

With Angular2 the discussion will be more interesting, but as far as I know Angular2 is not quite production ready.

Which is why we compare results, and not their philosophies and methodologies. It's like comparing CPUs from AMD and Intel, they have different architectures, but to understand real world performances, we compare results.

Except ReactJS and Angular are not the same class of thing. It's more like comparing an Intel CPU and an AMD motherboard.

I tried to quantify the performance numbers - and react does show virtual DOM rendering improvements, but eventing is expensive.


I really don't like Angular (mostly down to dependency injection, issues I had getting some of the components to work on mobile, and its messy ngAttributes markup), and it put me off React until recently. React is superb and a really new way of thinking about user interface components.

AngularJS is like writing in assembly. Sure, if you work really hard you can make it as performant as compiled C (ReactJS), but by that time your C buddy has built so much functionality you might as well stop all together.

(note: I've done professional angularjs and reactjs projects)

Ironically, he mentions ember which is in the process of adopting the virtual DOM based approach.

This isn't entirely accurate, as Ember essentially diffs the dynamic values, but not the DOM (or vDOM). In Ember the DOM is typically produced from pre-compiled templates, and values are used to hydrate those templates or to choose which templates are rendered.

Obviously exceptions exist, but this is by far the most common scenario.

When Ember goes to detect what changed, it basically ignores the DOM and looks at the dynamic bound values such as {{#if firstName}} or {{lastName}}. Using this information, it then decides what DOM mutations are needed to bring the DOM back into sync.

As a side note: Babel.js has some related optimizations for JSX/react uses.

All this means that for DOM creation and updates, the actual DOM is used.

Now this may sound scary, as we all hear the DOM can be slow. But as it turns out, some aspects of the DOM are actually quite fast, and often faster than the alternatives.

For example:

* fragment.cloneNode(true) to produce new content

* node.textContent to update content – nicely leaves content inert, without needing costly JS-based XSS escaping.

There are obviously downsides to either approach and as such likely some hybrid is ideal.
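The value-diffing approach described above can be sketched roughly like this (a toy illustration, not Ember's real API; all names are made up):

```javascript
// A pre-"compiled" template is split into static parts plus dynamic
// slots; on update only the slot values are compared, and the DOM
// structure itself is never diffed.
function compile(staticParts) {
  let lastValues = [];
  return function update(values) {
    const patches = [];
    values.forEach((value, i) => {
      if (value !== lastValues[i]) {
        // In a real browser this would be something like node.textContent = value
        patches.push({ slot: i, value });
      }
    });
    lastValues = values.slice();
    return patches; // only the slots whose values actually changed
  };
}

const update = compile(['<b>', '</b> ', '']);
update(['Tom', 'Dale']);  // first pass: both slots patched
update(['Tom', 'Smith']); // update: only slot 1 patched
```

Because the static structure never changes, there is nothing to diff except the handful of dynamic values, which is the claimed advantage over walking a full virtual tree.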

Just a question - why is immutability considered a performance improvement when in general it leads to performance degradation? I always assumed it supports a nice conceptual framework when dealing with difficult parallel algorithms but that's about it.

Immutability helps solve one of the hard problems in computer science: cache invalidation.

Mutability can always be faster given perfect optimization, in the same way that self-modifying assembly code can always be faster. However, actually doing that optimization at a byte-by-byte level is basically impossible. Techniques such as immutability make reasoning about how to optimize the general case much simpler.

This often ends up as a trade-off of higher RAM usage for lower CPU usage, but a good enough general optimization can give lower RAM usage too. Bad uses of immutability, done for dogmatic reasons, can give you higher RAM usage, higher CPU usage, and more complex code. As always, you need to watch what you're using, where, and why.
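For example, with plain objects and spread copies (a minimal sketch of the structural-sharing idea, not any particular library):

```javascript
// "Copying" immutable state is cheap because untouched branches are
// shared by reference, which also makes change detection a single
// reference comparison instead of a deep walk.
const state1 = { user: { name: 'Ann' }, items: [1, 2, 3] };
const state2 = { ...state1, items: [...state1.items, 4] };

console.log(state2.user === state1.user);   // true: unchanged branch is shared
console.log(state2.items === state1.items); // false: changed branch was replaced
```

Only the path that changed gets new allocations; everything else is reused, which is where the "higher RAM usage" cost can stay modest in practice.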

Not saying you're wrong, but I've never heard this argument. How do you figure that immutability helps solve the cache-invalidation problem? Immutability will cause cache issues, not help solve them. The answer to the question seems to be simply that immutability allows quick comparison, since identical references make a deep comparison unnecessary; nothing to do with cache invalidation.

I think what was meant was that immutability solves cache synchronization issues, or more accurately, state synchronization issues.

If you think about the DOM as the projected state of an application, immutable data structures let you very simply define functions that perform the transformation from application state to the GUI data structures (e.g. the DOM). If the state mutates, this can cause cascading state changes that make such a function much harder to reason about.

What are more common arguments you heard of? I'm just curious.

In the context of React, it allows you to implement a `shouldComponentUpdate` method which simply compares the references of two objects, which is extremely fast. If the references remain the same, then there is no need to invoke `render` again.
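A minimal sketch of that check in plain JavaScript (illustrative only; this is not React's actual implementation):

```javascript
// Shallow comparison: with immutable data, an unchanged reference
// guarantees unchanged content, so a === check per property is enough
// to decide whether a re-render can be skipped.
function shallowDiffers(prev, next) {
  for (const key in prev) {
    if (!(key in next)) return true;
  }
  for (const key in next) {
    if (prev[key] !== next[key]) return true;
  }
  return false;
}

const item = { label: 'row 1' };
shallowDiffers({ item }, { item });                     // false: same reference
shallowDiffers({ item }, { item: { label: 'row 1' } }); // true: equal value, new reference
```

Note the second case: the objects are structurally equal, but since the reference changed, a shallow check must assume a change happened. This is exactly why mutating state in place breaks this optimization.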

>> Is ReactJS really fast enough?


Is it really important? It exists to simplify app development, not to be fast. If you want fast, use similar tools based on the virtual-dom module.

Speed is just one concern; there is also a huge ecosystem, plus the native aspects, so it's more than just one dimension.

Other devs I know have said React's functionality on mobile is quite poor. Drag and drop and scrolling don't work as well, because the virtual DOM isn't adapted to work well on mobile. So for that reason I won't use React, and would rather go vanilla or use Backbone. After getting burned by Angular, I'm really wary of adopting another fancy framework.

Drag and drop/scroll should work on the real DOM not the virtual DOM, no? I use Drag and Drop in a web app (using Om, which is a clojurescript library built on top of React) and it works just fine across all platforms.

Not everything is about speed; I would choose manageability over speed any time.

Do you use ReactJS because it is fast?

my experience with react is that it's really fast.

Shows how far too many developers don't question claims but take this stuff at face value. Put it on a nice website with a cool domain name, fake some statistics, and voila: your new web 3.0 tech is out there. Bonus points if you are Google/Facebook/Apple.

Even if the article itself gets debunked, I feel that too many tech-savvy people are too superficial.

I think the bigger appeal of React isn't the "performance." It's the fact that you can reason about your app as if you were re-rendering the entire application with every change in state. That way you never have to think about mutating the DOM, and you have fewer places to screw up.

If you actually re-rendered the whole app with every state change, it wouldn't be performant, but with the virtual DOM, it's totally feasible. It is definitely unfortunate that developers take statistics for granted (I'm guilty of this), but it doesn't undermine the usefulness of the framework.
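That mental model can be illustrated with a toy virtual tree (illustrative names only, nothing like React's real internals):

```javascript
// render() rebuilds the whole virtual description on every state change;
// diffChildren() then finds the few real mutations needed (removals
// omitted for brevity).
function render(state) {
  return {
    tag: 'ul',
    children: state.items.map(text => ({ tag: 'li', text })),
  };
}

function diffChildren(prev, next) {
  const ops = [];
  next.children.forEach((child, i) => {
    const old = prev.children[i];
    if (!old) {
      ops.push({ op: 'insert', index: i, text: child.text });
    } else if (old.text !== child.text) {
      ops.push({ op: 'setText', index: i, text: child.text });
    }
  });
  return ops;
}

const before = render({ items: ['a', 'b'] });
const after = render({ items: ['a', 'x', 'c'] });
diffChildren(before, after);
// [{ op: 'setText', index: 1, text: 'x' }, { op: 'insert', index: 2, text: 'c' }]
```

The developer only writes `render`; the diff step is what keeps "re-render everything" from translating into touching the entire real DOM.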

> It's the fact that you can reason about your app as if you were re-rendering the entire application with every change in state. That way you never have to think about mutating the DOM, and you have fewer places to screw up.

This is really no different than Angular's value proposition - everything is bound to the DOM via the scope and re-renders automatically with every change in state.

Ah yes, BUT you can't deny that one of the major selling points of React after its announcement was its superior performance due to its DOM magic. It may be only one selling point, but nevertheless a major one.

Now everyone says "oh, but I don't care about the performance, I like that React can do XYZ" instead of proving the author wrong. Why? Because in truth 99% of developers would have no idea how to do it. I'm convinced there is a significant number of web developers who wouldn't even know how to write a web page in plain JS, CSS and HTML without a myriad of tools generating stuff for them. Same in other areas: if not for aphyr, most people wouldn't even know how to test a database.
