This looks quite interesting - I got the impression that React would need quite a rewrite internally to accomplish incremental rendering.
One thing I noticed with the priority mechanism proposed - Angular 1 has something like this, but it turned out to be a complex API to understand and use. In fact, just about everyone stays away from mucking around there, and every usage of it I have seen in the wild is a straight-up hack/misuse. I suspect that something like this would increase complexity dramatically.
Animation is a problem that might be worth studying across different systems, maybe not even just the browser. For example, Angular's implementations have ended up mirroring how Chrome handles animations, after collaboration with the Chrome team. The work probably shouldn't be singularly focused on how one browser has implemented it, but studying these systems is probably the best way to maximize extensibility and performance.
Wow I'm at the bottom of the page and this is the first post about the actual article. One thing I was wondering was whether the approach they are taking might actually increase the complexity rather than decrease it. For example, when writing games, it's pretty common to have your physics engine and your rendering engine run at different frame rates. You get the incremental rendering by separating the two. This approach seems to tie the rendering of the internal model to the final rendering.
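For the curious, the decoupling I mean is the classic fixed-timestep game loop. Here's a rough sketch (updatePhysics and render are placeholders for your own model update and draw code, not a real API):

    const PHYSICS_STEP = 1000 / 30; // model updates at a fixed 30Hz
    let lastTime = performance.now();
    let accumulator = 0;

    function frame(now) {
      accumulator += now - lastTime;
      lastTime = now;
      // Advance the internal model in fixed steps, as many as we owe...
      while (accumulator >= PHYSICS_STEP) {
        updatePhysics(PHYSICS_STEP); // placeholder: advances the internal model
        accumulator -= PHYSICS_STEP;
      }
      // ...then draw the current model state once per display frame.
      render(); // placeholder: draws whatever the model currently says
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);

The model can tick at whatever rate it needs; the screen just samples it.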
To be fair, I haven't really spent much time considering the JS animation API. I suspect this is really what's at fault here -- there may be no easy way to update the animation after the fact. Still, I can't help thinking that it would be better to separate the screen rendering from the internal model rendering...
Yes, these were my thoughts also. I've got a hunch that concerns have been incorrectly mixed here, leading to unnecessary complexity. But it's too soon to tell for sure.
I believe the idea is to expose as little of the prioritization details as possible. E.g. updates triggered by a text input's change event will automatically receive higher priority. Or a DOM element with display: none will automatically receive lower priority. Etc.
The proposal doesn't sound like that was what was intended - it seems like it is left to the user to implement the fibers, with the work priority to be set by the user somehow. Indeed, a quick glimpse of the commits of the implementation reveals a priority system of values exposed directly to the user when creating components (see unit tests).
acdlite is correct here -- most users won't need to worry about priorities and it's unclear to what extent we'll even expose them. The unit tests do not describe a public API (and we're not close to settling on one).
Thanks for the clarification! I've been flying 16 hours today (Milan to SF), so I'm sure I didn't grasp the full significance of the concepts in the doc.
Can someone explain the benefits of React's DOM diffing model?
Rather than try to diff two DOM trees and optimize reconciliation, why not use one-way data binding and update exactly what has changed, with 0 reconciliation cost?
Either way, the upfront work - linking DOM elements with model attributes - is the same. In JSX this is done by interpolating variables into the template, and in one-way data binding this is done via data attributes.
Anecdotally, I've found one-way data binding (using Rivets[1]) to be very fast in practice for a view hierarchy ~10 layers deep.
> Rather than try to diff two DOM trees and optimize reconciliation, why not use one-way data binding and update exactly what has changed, with 0 reconciliation cost?
If all of your rendered HTML corresponds 1:1 with the underlying data model, there's no particular advantage to the declarative approach used by React.
However, once you start to have relationships in the underlying data, so DOM events in one place can affect the desired DOM content somewhere else, simple one-way data binding is not sufficient.
For example, suppose we have an HTML table that needs to be rendered from some underlying data structure in our JS. How do you do that with simple one-way data binding -- what are you binding to what, exactly? If you're managing the HTML directly, you need to allow for things like rows being added or removed, not just updating the content of an individual cell.
Now suppose that table also allows you to sort by different columns and filter so only certain rows are shown. Rendering the correct HTML now requires keeping track of the underlying data model and all of the UI-related state, with the actual rows to be shown affected by several different factors.
Now suppose the table is going to have many rows, and pagination is required in the markup and controls.
You certainly can handle all of the related events manually, but in order to keep the rendered HTML correct you now need to write code that understands each possible change in the underlying data and UI in every possible context, and the combinatorics can get quite unpleasant here.
The alternative that declarative rendering tools like React offer is that you can specify what your rendered table should look like in absolute terms, using whatever inputs need to be considered but ignoring the existing state of the DOM. You still need to consider however many factors you have that affect the rendering, which might still be somewhat complicated in a case like this, but at least you only need to consider them from a single, neutral starting point, not relative to every possible starting point.
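As a rough sketch of what I mean (the names are illustrative, not a real API), the sorted/filtered/paginated table becomes a single function of its inputs, with no reference to what the DOM currently contains:

    function TableView({ rows, sortKey, filterText, page, pageSize }) {
      // Derive the visible rows from all the inputs, in absolute terms.
      const visible = rows
        .filter(r => String(r.name).includes(filterText))
        .sort((a, b) => (a[sortKey] < b[sortKey] ? -1 : 1))
        .slice(page * pageSize, (page + 1) * pageSize);
      return (
        <table>
          <tbody>
            {visible.map(r => (
              <tr key={r.id}><td>{r.name}</td><td>{r.value}</td></tr>
            ))}
          </tbody>
        </table>
      );
    }

Rows appearing, disappearing, reordering, or paginating all fall out of the same code path; the library figures out the minimal DOM transition.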
I don't get how your table example justifies the need for such a complex rendering pipeline.
There are plenty of template libraries that can optimally re-render a list/table after row insert/remove/move operations. Events generated by elements in rows can easily be associated with the proper data item.
React doesn't own declarative templating, and if you only use one-way databinding, and re-render only what changes (again, plenty of template libraries do this), you get `v=f(d)` and efficient updates. Why the extra complexity?
The table was just an example. The point is that if you use a declarative rendering approach, any non-trivial rendered output you want to generate only needs to be specified once, relative to a neutral starting point.
React certainly doesn't own the idea of declarative rendering, I agree, but what template libraries do you have in mind that offer that sort of functionality without either being limited to some specific case (tables, say), having questionable performance properties, or implementing the same sorts of techniques that virtual DOM-based libraries like React are using?
Perhaps it would help if you could give some concrete examples of the libraries you are thinking of, and point at some documentation (or give your own example code) to show how they would handle the kinds of events I described in my post above more efficiently and/or with cleaner code than a library using declarative rendering and a virtual DOM such as React.
No, I'm talking about arbitrary relationships in the underlying data model, such that arbitrary events from anywhere in the DOM can lead to arbitrary changes in rendering anywhere in the DOM.
Computed properties of the kind you linked to, when used in turn to drive rendering via one-way data binding, are a common but relatively simple example.
And yes, that sort of direct DOM manipulation is likely to be faster, often much faster, than DOM diffing in the cases it covers. The advantage of the more general diff-based libraries is their generality, not their rendering speed.
Just imagine any non-trivial set of relationships in the underlying data, and then some rendered content that doesn't just depend on individual data points.
For the record, I haven't actually seen anyone address my earlier example with the sorted/filtered/paginated table yet.
Another typical example might be a complicated form where some fields are shown or hidden or have different options available depending on the values of other fields, perhaps including circular dependencies.
Another might be some sort of dashboard that adds a tab/tile/panel for each of some varying set of data sources and adjusts the layout or level of detail depending on how many sources need to be included.
Another might be rendering a chart showing several different data series over time, where the scales and positions of the axes are adapted to span the entire data set.
Of course you could manually update the DOM in response to relevant changes in the data in all of these cases, but keeping track of all the different variations and transitions gets old very quickly once your UI has a few different cases like this.
I think I understand what you're getting at - correct me if I'm wrong.
React's templating allows you to execute arbitrary JS logic when rendering, which is more versatile than simple one-way data binding.
My thoughts there are that I'd probably extract the logic first into the view model before passing the buck to the templater. For example, instead of filtering/sorting a collection in the template, I'd create a `normalizedCollection` computed property in the view model first (which helps with testing as well). In practice I've found that Rivets' binders are enough - showing/hiding/looping over data covers nearly every UI.
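Roughly, as a sketch of that view-model approach (the property and field names are just for illustration):

    const viewModel = {
      items: [],        // raw data
      sortKey: 'name',  // UI state
      filterText: '',
      // The template only ever loops over this derived collection.
      get normalizedCollection() {
        return this.items
          .filter(i => String(i.name).includes(this.filterText))
          .sort((a, b) => (a[this.sortKey] < b[this.sortKey] ? -1 : 1));
      }
    };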
In the case of bigger chunks getting un/re-rendered (such as adding tabs/tiles/panels), you can take advantage of a framework that does sub-view management (such as Marionette[1]).
Computed properties can solve the hidden fields and multi-data series chart problem you mentioned.
> React's templating allows you to execute arbitrary JS logic when rendering, which is more versatile than simple one-way data binding.
Yes, that's what I'm getting at here. In particular, you only have to write that arbitrary JS logic once to cover both the rendering and all the updating cases, so you get the same kind of automagic monitoring in your view that data binding gives but in the general case.
As an aside, nothing about React precludes using a separate view-model style layer between your base data model and the rendering logic to deal with computing derived information. Indeed, this is often useful as a form of cache if you have computationally expensive work to do when certain source data changes, for example to summarise a large data set or compute the layout for some complex visualisation. Again, computed properties seem to be a variation on this theme.
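A minimal sketch of that caching idea, assuming a computeExpensiveSummary step of your own:

    // Tiny cache layer between the raw model and rendering: the expensive
    // summary is recomputed only when its input object actually changes.
    let lastData = null;
    let lastSummary = null;
    function getSummary(data) {
      if (data !== lastData) {
        lastData = data;
        lastSummary = computeExpensiveSummary(data); // placeholder for your own work
      }
      return lastSummary;
    }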
No doubt there are other ways to achieve the desired results, though inevitably they need their own version of plumbing to connect everything up, and in quite a few that I've seen over the years it's still difficult to co-ordinate updates and maintain good performance as systems scale up.
The main advantages of a library like React, in my experience at least, are that it is simple and universal in its design and interface, but also reasonably transparent about both behaviour and performance characteristics, with enough hooks that you can help it out in cases where the basic usage is inadequate.
I'm with you. All this complexity is baffling, when there's a very optimized rendering engine sitting there in the form of the browser, and soon a suitable component model.
If you wire up your components so that you just re-render the components that have data changes, you get the benefits of dom diffing, without having to diff.
You can skip all this framework JS, and just directly write an async-rendering Web Component. It's pretty compact and easy to understand. I just wrote up a sketch that would work with one of the incremental-dom-based template libraries:
    // A simple and correct scheduler is to just enqueue a microtask;
    // tasks run in order down the component tree.
    const scheduleRenderTask = (task) => Promise.resolve().then(task);

    class AsyncWebComponent extends HTMLElement {
      constructor() {
        super();
        this.isLayoutValid = true;
        this.attachShadow({mode: 'open'});
        this._foo = 'bar';
      }

      // Accessors could be generated with decorators or a helper library
      set foo(value) {
        this._foo = value;
        // You could check for deep equality before invalidating...
        this.invalidate();
      }

      get foo() {
        return this._foo;
      }

      connectedCallback() {
        this.invalidate(); // trigger the initial render
      }

      render() {
        // Call into the template library to re-render the component.
        // If the template is incrementally updated (say with incremental-dom),
        // then only child components whose data changed will be updated.
        templateLib.render(this.template, this.shadowRoot);
      }

      invalidate() {
        if (this.isLayoutValid) {
          this.isLayoutValid = false;
          scheduleRenderTask(() => {
            this.render();
            this.isLayoutValid = true;
          });
        }
      }
    }

    customElements.define('async-web-component', AsyncWebComponent);
When the parent component renders and sets `foo`, the child will schedule a task to re-render.
In JS:
    let e = document.querySelector('async-web-component');
    e.foo = 'baz'; // e schedules a task to re-render
    e.foo = 'qux'; // e doesn't schedule a task, because one is pending
This is a sketch of a raw custom element with no library for sugar. A library that assisted in the pattern would presumably implement the accessors for you.
ES/TypeScript decorators would be an easy way to do this, or just a function that fixes up the class after declaration.
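For example, a hypothetical fix-up function (not a real library) that generates the invalidating accessors, replacing the hand-written foo accessors above:

    // Hypothetical helper: defines invalidating accessors for the given
    // property names, reusing the invalidate() from the sketch above.
    function defineObservedProps(cls, ...names) {
      for (const name of names) {
        const key = '_' + name;
        Object.defineProperty(cls.prototype, name, {
          get() { return this[key]; },
          set(value) {
            this[key] = value;
            this.invalidate();
          },
        });
      }
    }
    defineObservedProps(AsyncWebComponent, 'foo');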
What if your component has a lot of nodes? Do you re-render them all? I guess that's the reason they made React. Even if the state of a component changes, most of the DOM nodes of the rendered component stay the same, and re-rendering them all would take too long.
So Aurelia components are web components? I can create one with document.createElement('my-aurelia-component'), and container.appendChild(), and the element will fully work and have its lifecycle callbacks called?
I tried to look in the documentation, but couldn't see this clearly stated. It looked like Aurelia would have to be in charge of lifecycle to get dependency injection to work. Specifically, custom element constructors are called by the browser with no arguments: how does this work with Aurelia components that expect to receive constructor arguments via dependency injection?
If this really creates a Web Component, how does `this.http` ever get set? The browser will call the constructor with no arguments.
What exactly gets passed to `document.registerElement()` (for the v0 Custom Elements API) or `customElements.define()` (for the v1 Custom Elements API)? Are you saying that Aurelia generates and registers a separate custom element class? That's not in the docs. If so, how does that element find the instances to inject into the user-defined constructors?
The main issue that talks about custom element support is still open, and the answer really does seem to be that Aurelia can't create custom elements: https://github.com/aurelia/framework/issues/7
This unfortunately seems to confirm that Aurelia uses the terminology of Web Components, but doesn't actually use or create real Web Components.
That link you just posted is not contained in any of the docs or links that either you or I posted above. I can't actually find it in the entire documentation hub, so even if I did "read the docs", how was I supposed to find it?
It's component diffing, not DOM diffing. I agree that the latter is unnecessary. Component diffing is an implementation detail that enables the declarative style that makes React worth using.
Imagine a parent component that conditionally composes a Button that is either red or blue. You could track that button instance and flip the property when necessary. But that requires a lot of error-prone boilerplate, as most real applications have many states depending on many inputs.
With React you just define which variant you want in the render function of the parent. No state transitions necessary. React then does the hard work of understanding that you rendered a red button then rendered a blue button, so it should reconcile the two.
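In code, roughly (Button and its props are illustrative):

    // The parent just declares the variant it wants on every render;
    // React reconciles the red -> blue transition for you.
    function Parent({ hasError }) {
      return <Button color={hasError ? 'red' : 'blue'} label="Save" />;
    }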
Data-binding is crap for many reasons. Here's a big one: inability to use immutable data types, except for the trivial case of a static component.
When you're diffing the vdom, the system doesn't care where the data comes from, i.e. the data can come from a plain js array or an Immutable.js list.
Second reason: When you have to worry about how to get your model to play nice with the view, that's a leak in the abstraction. This applies to ALL data-binding systems.
This is very cool, but the nagging issue I have with React is this desire to reimplement everything in JavaScript and bypass the Browser, DOM, CSS, etc. I've lost track of what benefits this really offers?
Could you elaborate? All React.js "reimplements" are components for DOM elements and a custom event dispatcher that lets you not worry about browser quirks and differences in event handling.
There's also a large difference between what DOM and VDOM mean, and the VDOM is not a DOM reimplementation.
Why write every component in 3 different languages artificially forcing it split into multiple sections when you can just develop it in one cohesive unit using the most powerful one?
Well partly because React is not that fast, and things like incorporating CSS animations are not particularly easy.
I appreciate FB is working on improving these areas, but the original point is that maybe a lot of this optimization would be unnecessary if we worked with the browser's capabilities rather than outside them in JS.
Its batched DOM writes of virtual DOM diffs end up being a lot faster than previous SPA development models, and they can be further optimized with pure renderers. It has room for improvement, but that doesn't invalidate React's approach.
> Its batched DOM writes of virtual DOM diffs end up being a lot faster than previous SPA development models, and they can be further optimized with pure renderers.
Sorry, but that isn't necessarily so. React is a lot faster than many of the other front-end frameworks, but that's because most front-end frameworks are horribly, horribly slow. However, all that rerendering and DOM diffing isn't free, and React can easily still be much slower than applying manual DOM updates if you have the patience and accuracy to write the code to do them, even if you use pure renderers and write shouldComponentUpdate everywhere.
They're taking a comprehensive look at how imperative (React) or declarative-autotranslated-into-imperative (JSX) code for describing a View actually becomes that View (which is currently either (V)DOM, or some native UI toolkit with React Native).
This process occurs in browsers, libraries, and UI toolkits today; React gives you an alternative. This means that you can render in the client, render on the server, render wherever starting from the same code.
Well it doesn't simplify development and that's the point, it's not a clean separation of concerns.
It defies the whole purpose of using CSS, the DOM API and keeping your javascript's structural patterns cohesive. This is why Webcomponents are now part of the living standard. The browser runtime nullifies whatever React was trying to achieve.
>Well it doesn't simplify development and that's the point, it's not a clean separation of concerns.
On the contrary, it's very clean.
The real separation of concerns is between business logic and view code, and React does that perfectly.
Not between HTML, CSS, DOM etc., which are artificially inflated concerns due to how the browser ended up as an ad-hoc application coding platform (from its "document" viewer beginnings, which is where the "D" in the DOM comes from).
(And of course nothing stops you from using CSS and external styles with React, separating style from behavior).
>This is why Webcomponents are now part of the living standard.
Web components only solve the non-interesting parts of what React does. Namely, isolated components. All the state and management mess for the entire app is still yours to deal with. Even React alone does more, nevermind React+Redux/FLUX etc.
Which is also the reason React caught on like wildfire, while nobody much cares for Webcomponents (at least not in any statistically significant numbers).
I generally agree with your position here, but I think you're giving React a bit too much credit for separating business logic and view code. In practice, anything complicated probably requires shouldComponentUpdate for acceptable performance. Writing reasonably efficient shouldComponentUpdate in turn requires underlying data that can be compared quickly, hence for example the current interest in immutable data structures that can be tested for equality by checking a single reference. And so the choice to use React for rendering does have implications for how the underlying data is stored as well, which undermines any claims about truly separating the view logic from the business logic.
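For instance, a typical shouldComponentUpdate ends up leaning on reference equality, which only works if the underlying data is replaced rather than mutated in place (a sketch, not anyone's actual code):

    class Row extends React.Component {
      shouldComponentUpdate(nextProps) {
        // Cheap reference check: only valid if `item` is treated as
        // immutable, i.e. any change produces a new object.
        return nextProps.item !== this.props.item;
      }
      render() {
        return <li>{this.props.item.name}</li>;
      }
    }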
Sure, all abstractions are leaky one way or another, but in the end React gives you much better separation than what we had before.
Having to deal with shouldComponentUpdate is a blessing compared to having to juggle 4-5 different concepts and web technologies, plus manage state, plus separate logic yourself, etc.
Heck, even in Backbone, which is as bare as it gets, you needed to wrap your data in specific classes...
And that's IF (and it's a big IF) you have some very complex performance case. In most cases I never needed it, and tons of stuff can just be a pure render function.
Again, I generally agree with your comments here. I just think it's important to point out that this particular leak exists, because a lot of React advocacy completely glosses over the point. The need to have a data model that supports quick diffs one way or another has profound implications for the scalability of your front-end code and it's an architectural decision that will be expensive to change later if you get it wrong initially and learn that the hard way.
It's rendering part of the virtual DOM in one cycle, rather than the full thing. This goes hand-in-hand with the ability to prioritize the rendering of different components.
For instance, when you're scrolling through a large table, you may want to prioritize the rendering of a custom scroll bar over the rows themselves. With the Fiber architecture, you can do that so the scroll bar can hit a responsive 60fps and the rows can fill in when there are spare cycles.
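Conceptually (this is not React's actual API, just a sketch of cooperative scheduling), it boils down to doing small units of work and yielding back to the browser between them:

    let nextUnitOfWork = firstUnitOfWork; // hypothetical starting point

    function workLoop(deadline) {
      // Work while the browser says there's idle time left in this frame...
      while (nextUnitOfWork && deadline.timeRemaining() > 1) {
        nextUnitOfWork = performUnitOfWork(nextUnitOfWork); // hypothetical
      }
      // ...then yield, so higher-priority work (like that scroll bar) can run.
      if (nextUnitOfWork) {
        requestIdleCallback(workLoop);
      }
    }
    requestIdleCallback(workLoop);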
Does it strike anyone else that the entire battle cry of React reeks of premature optimization?
Let's load a giant JavaScript framework, slow our onload and page-ready events by half a second, overcomplicate our build infrastructure with JSX, mix up our declarative code with non-standard imperative/functional JavaScript and HTML, start importing Sass and CSS into our JavaScript files, and use an API that is weird and borrowed (componentDidMount?).
What exactly is the savings here? I fail to see it. When and if you have 100,000 DOM elements on a page and need to do efficient rendering and reconciliation of what has changed? Poor code organization? Adherence to a weird and clunky API? Because Facebook does it?
I'm sorry, but I've had the displeasure of working on a few apps where developers have used React, and I could have done the same thing without all the bloat, achieving much faster performance, both from first byte to when the page was ready for the user, and in any interactions on the page. All of these apps were relatively simple single-page apps.
What has happened to us as developers that this sounds like a good idea? Whatever happened to pragmatism? Has that just fallen by the wayside in favor of the new shiny?
It isn't so much premature optimization as the next step in the evolution of modern webdev.
Let's go back 6 years in webdev land. jQuery was king, Angular was the up-and-comer. You have a fairly complex jQuery site, probably rendering most things from Ruby or PHP. The only way to debug it is with console.log and refreshing the page.
Angular comes around and introduces modules and unit testing - cool! You hook up Grunt and Protractor, you install Angular Batarang. Everything is good. But you end up building this huge Angular app. You try to debug an edge case that only appears when you have a certain panel open under a certain tab. You get frustrated that you have to click on the same things over and over again. Livereload can inject CSS but not JS. When you change the HTML of your Angular directive, you have to reload the page too.
This is where React comes in. React + Redux + Webpack make it such that you can apply livereload to _everything_: the CSS that your component imports, the HTML it renders, what happens when you click a button. Everything. It's not so much the new shiny as just the next step in making things easier for devs. It gets out of your way so you can focus on styling, structure and business logic.
> What exactly is the savings here? I fail to see it.
Huh? Did you read the article? One answer (which as a React user I think is the main one) is right there. To quote them,
> The central idea of React's API is to think of updates as if they cause the entire app to re-render. This allows the developer to reason declaratively, rather than worry about how to efficiently transition the app from any particular state to another (A to B, B to C, C to A, and so on).
If you disagree with this, you should explain why.
It sounds like you worked on a few projects where people reached for a JavaScript framework when it wasn't necessary; I can understand how that can be frustrating. But surely you aren't saying that there isn't any room or merit for front end frameworks at all?
By front end framework standards, react isn't giant at all. And in terms of internal complexity, most people using react don't even have to know about any of the things in the post
I think it may be helpful to consider the possibility that there exist people who are more rational, pragmatic, and intelligent than you or I, who think React is useful for certain types of work. Assigning the motivation of "chasing shiny things" to people liking something you don't will be detrimental to gaining further understanding.
It's no longer considered premature optimization when you have 100,000 DOM elements on a page, like Facebook conceivably would. It's not surprising that this project comes from them.
Does a run-of-the-mill JS app require React? No, there are hundreds of alternatives from direct state mutators like jQuery, all the way through full MV* frameworks like Angular, Ember, Backbone, or newer.
React is cool and innovative in its problem domain, but not all users of a technology will use it solely on its merits. This is frankly a risk with any 'popular' technology.
You might want to re-read your comment. Your argumentation is only relevant for grossly over-engineered projects.
React isn't about the number of elements on the page or the performance, it's about making it possible to build the UI in a declarative way. It raises the barrier of entry but immensely improves clarity, testability and maintainability.
If you ever work on a non trivial frontend with people who are well versed in the React stack and functional programming, you're in for a ride full of amazement.
More than once I chose to use React (+ Redux, etc.) for something that could've been solved with a few snippets of jQuery or even just plain javascript.
In many of these cases the app ended up becoming complex enough that I was happy to have prematurely optimized. This has happened often enough that I tend to give React the benefit of the doubt when I'm building an app. (This is also the reason why sometimes I choose to use batteries-included Rails even though Express.js or some lighter framework seems like a perfectly fine choice.)
In other cases using React didn't make things easier, but doing so was convenient because I'd been working with React most of the time anyways (and had some other project to use as a basis or even entire components I could re-use).
All that said, I do err on the side of shiny sometimes, and spend much more time (and KBs) using React for something where it wasn't necessary. So I see your point.
The best thing about React / Relay / GraphQL is the emergence of the idea of using a graph query language. Will GraphQL be the ultimate winner? I don't know. But I suspect this approach will catch on, since the problems with RESTful interfaces are well known, and painful enough to force a change. And yet, once we start using GraphQL, the other technologies get dragged along with it. We want pagination, so we use Relay. We want immutability on the frontend, so we use React. It's a powerful combination, though I can imagine the ideas being copied by other frameworks, which perhaps offer a cleaner implementation (my biggest gripe with GraphQL is the verbose mutations).
It's a tightrope. There are plenty of arguments to be made for having a large build system that makes development easier/more conventional. There are also arguments to be made for minimalism and sticking to the fundamentals of the web with plain HTML/CSS/JS.
What we can all agree on is that it's good to at least have the CHOICE of development environments, build systems, and ecosystems. And I personally always fall back on: "this complexity is why I get paid more than most people".
> And I personally always fall back on: "this complexity is why I get paid more than most people".
This is pretty cringeworthy. If your tools are introducing enough incidental complexity that it needs to be reflected in your compensation, I'd argue your tools are failing your company badly.
You should be paid more to reflect a higher level of technical or domain expertise relative to other engineers, not to account for your ability to glue together a mish-mash of poorly documented and half-baked technologies.
I don't write tools that are unnecessarily complex in order to make more money. I get paid to use tools that are necessarily complex, and to use them properly. How you interpreted my post to relate entirely to the former is beyond me.
Just built our first major react/redux site. Moderate complexity, I'd say. React runs pretty fast, but on page load we take 2.5s to compile all the JavaScript.
In a large application my rule of thumb is 1/3 compile, 1/3 React internals (which includes DOM updates) and 1/3 product code. 2.5 seconds is a lot, but not surprising if an application is being shipped as a multi-megabyte bundle.
Compiling as a build step doesn't get you very far when you are forced to ship source code to JS engines.
I guess I may be a bit confused about what he meant by compilation.
Since JS is JIT compiled, I assumed perhaps he was delivering his scripts in raw JSX+ES6 and transpiling at runtime via Babel's browser.js (as shown by React's Getting Started guide) [1].
If he meant time spent parsing the JS, then I'd be a bit suspicious about the 2.5s, both because of the extraordinary length and because I find it hard to believe that someone who knows how to measure parse time would confuse the term with compilation.
Could you elaborate on that? My experience with React has been that most of the 'shiny time-waste' problems don't arise from, say, React + Redux, but from the itch to use all the 'cool stuff' that people are talking about.
I've wasted so much time configuring apps to get hot module reloading to work, adding all sorts of redux middleware, using react-router as it transitioned to a completely new react-router, working with higher order components, using some immutable library, etc.
Many of these things were not necessary for the stuff I was building. In fact, in a bunch of cases I could've just used React and nothing else. I suspect if I'd done that, it wouldn't have felt as much like premature optimization.
Thankfully most of these projects were personal, and learning a lot about the React ecosystem has been fun. Still, lots of premature optimization.
I'm very curious to hear where you ran into issues.
I wouldn't say that React is particularly huge compared to other frameworks. Certainly when it first came out it was actually quite small. I haven't checked lately, but I think it is still considerably smaller than things like jQuery.
But you answer your own question in the second paragraph. Why use React? Because you want to use a non-standard functional style of Javascript. For me, it's a pretty big advantage, but if you don't like functional style programming and can't deal with persistent, immutable data structures, then I think you would be wise to avoid React.
To a certain extent I can sympathise with your feeling. Almost every time there is an article linked here on HN I see people getting into trouble with React. Usually the solution they reach for first is to strap on more plugins (with Flux, etc). Often the structure imposed by those plugins help them be more disciplined, improving their code, but at the cost of added complexity.
React itself is very simple. The trick is in understanding how to use it. Things like componentDidMount are useful for very strange corner cases, but it is pretty common to see people use it in every single component they write. If you pull componentWillReceiveProps, etc out of the box, then it should be a gigantic clue that you have done something very, very wrong. Mostly there are warnings in the documentation about this. Mostly people ignore/don't read them ;-).
JSX is really a non-issue for me. Personally, I don't understand why anyone would want it. You can write exactly the same code without JSX and it looks pretty much exactly the same (except with lots of insane, silly parentheses -- but, we're here for the functional, right?). Whatever. Normally I have to browserify my code anyway. Adding another step to the build is not exactly rocket science. I'm also usually using either coffeescript or ES6 through babel, so I've got a compile step either way. The times where I insist on no JSX is when I'm doing ES5 work and want to run unit tests in Node without the compile step. But that's pretty rare.
Having said all that, I'll take the opportunity to offer you some advice. I don't know if you are in the mood to receive it, so it might be fruitless. You said:
> Whatever happened to pragmatism? Has that just lost by the wayside of the new shiny?
One of the things about smart/talented developers is that they are used to growing up thinking that they know the right answer. I was certainly of that mould. As I've gotten older, though, I've realised that this way of thinking is self-referential and ultimately limits your growth.
In this case, I think you've decided a priori that the way you are comfortable doing development is already the best -- or at least so much better than what you see elsewhere that it doesn't matter if it's the best. When you see people struggling (and failing) with something new and unfamiliar, the natural inclination is to assume that it is no good.
Did you know that back in the day, many programmers thought that MFC was an exceptionally good idea? Anybody forced to use that anachronism now would probably quit on the spot. Things do get better over time and new frameworks and techniques can improve our lives as programmers. While it is pragmatic to stick with what you know, you have to balance that with learning new ideas that make you a better programmer.
Humility has always been something I struggle with, so perhaps I am the wrong person to say anything. However, when you see someone making the same mistake that you made over and over again, it behoves one to speak out, doesn't it?
setState is a public API, usually triggered in response to a user event like a click or an input change event.
State in React is local to a component, but it can be passed down to a component's children in the form of props.
Centralized state can be achieved using a library like Redux, where an individual component subscribes to an external data store's changes. Redux abstracts the details away, but under the hood it's still just setState.
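A minimal illustration (the component names are made up):

    class Counter extends React.Component {
      constructor(props) {
        super(props);
        this.state = { count: 0 }; // local state, owned by this component
      }
      render() {
        return (
          <div>
            {/* local state flows down to the child as a prop */}
            <CountLabel value={this.state.count} />
            <button onClick={() => this.setState({ count: this.state.count + 1 })}>
              +1
            </button>
          </div>
        );
      }
    }
    const CountLabel = (props) => <span>{props.value}</span>;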