
React, Inline Functions, and Performance - kawera
https://cdb.reacttraining.com/react-inline-functions-and-performance-bdff784f5578
======
hliyan
I'm irked by the frequent use of "premature optimization is the root of all
evil" without actually comprehending what Donald Knuth meant to say: "We
should forget about small efficiencies, say about 97% of the time: premature
optimization is the root of all evil. Yet we should not pass up our
opportunities in that critical 3%".

This quote is from an era when hardware resources were at such a premium that
developers obsessed excessively over the performance impact of every line of
code.

~~~
joshuata
Also, you have to design for performance. There are plenty of ways to
architect code that lock you out of future optimizations. I agree that small
optimizations such as different referencing techniques should be measured, but
you should be able to try both out, and "premature optimization is evil" often
keeps you from ever optimizing.

------
Silhouette
_Ask any performance expert and they will tell you not to prematurely optimize
your program. All of them. Yes, every single one of them. 100% of people with
deep performance experience will tell you not to prematurely optimize your
code._

I have spent much of my career working on performance-sensitive and/or
resource-constrained systems of one kind or another, and the above is simply
not true.

Performance is like security or reliability. Yes, sometimes you find local
weak spots, and sometimes you can do something locally to improve them.
Profilers are great and changes should be based on measurements of real
performance; no argument from me there. But in general, these are global
concerns. You can't always retrofit "acceptable performance" using a profiler
and local changes down the line, any more than you can retrofit "robust
security". If it matters for your application, it needs to be considered from
day one, and your overall design at least needs to _allow for_ doing the right
things even if you choose to not then implement those things in full
immediately.

Needlessly busting your cache is a common source of performance problems on
many modern platforms, and that is essentially what you're doing by passing
inline functions as props when rendering React components whose render
algorithm is expensive enough to warrant shouldComponentUpdate.

Ironically, from the discussion later in the article, the author does seem to
understand this to some extent, so it's frustrating that they still lead with such
a patronising introduction, and that they keep generalising their own
experience when not everyone will be working on the kinds of projects where
the same assumptions are valid. There are some useful ideas in this article,
which I'm sure would be relevant to a lot of React-using developers, but they
get a bit lost among the strange and/or dubious assumptions that underpin much
of the author's arguments.

~~~
baddox
Doesn’t “doing premature optimization” just mean “doing optimization before
you ought to be,” and thus the phrase is a tautology if provided without any
context or explanation?

Obviously any useful advice isn’t just “don’t prematurely optimize,” but
rather explains how to _recognize_ premature optimization. I like the advice
in this article to write things naturally until you have data that
demonstrates that a refactor will improve performance.

Obviously experienced engineers will make deliberate optimizations up front
without measuring. A great example is making sure your ORM uses a join instead
of making N+1 queries:
[http://guides.rubyonrails.org/active_record_querying.html#eager-loading-associations](http://guides.rubyonrails.org/active_record_querying.html#eager-loading-associations).
Yes, a small app could probably get away with N+1 queries for a long time, but
I wouldn’t consider writing this properly to be a premature optimization.
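The N+1 distinction can be sketched without any real ORM. This is an illustrative toy, not ActiveRecord: `fetchAuthor` and `fetchAuthorsByIds` are hypothetical query functions that just count how many "queries" each approach issues.

```javascript
// Toy data store and two hypothetical query functions that count calls.
let queryCount = 0;
const authors = { 1: "Ann", 2: "Bo" };
const fetchAuthor = (id) => { queryCount++; return authors[id]; };
const fetchAuthorsByIds = (ids) => { queryCount++; return ids.map((id) => authors[id]); };

const posts = [{ authorId: 1 }, { authorId: 2 }, { authorId: 1 }];

// N+1 shape: one query per post.
queryCount = 0;
posts.forEach((p) => fetchAuthor(p.authorId));
console.log(queryCount); // 3

// Eager-loading shape: a single batched query for all the posts.
queryCount = 0;
fetchAuthorsByIds(posts.map((p) => p.authorId));
console.log(queryCount); // 1
```

The query counts grow linearly with data size in the first shape and stay constant in the second, which is why it's reasonable to pick the batched shape up front without calling it premature optimization.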

~~~
Silhouette
I suppose my point is that for applications where performance is a significant
issue at all, there may be no useful interpretation of "premature
optimisation". You have to design with at least the ability to code
efficiently from the start, without relying on abstractions that hide
performance traps. Otherwise you can back yourself into a corner where the
only way out is to redesign or even entirely rewrite large parts of your code,
and I think it would be generous to consider that sort of change a mere
optimisation.

------
kitten_mittens_
Overall, I think the post is a little too dismissive about the performance
implications of bind and of new functions triggering rerenders.

In IE 11 and Safari for example with Babel transpilation, arrow functions are
going to eat your frame rate alive, especially if you have a text input
handler with some sort of validation around it. Even in latest Chrome and
Firefox, you end up with inputs that feel slow.

I agree that PureComponent and new object identities cancel out each other's
benefits a good deal of the time.

The "pure component without handlers" approach has a notable exception: a
sortable table with inline editing, or even checkboxes, where the row index
bound into the handler matters.

Ultimately, it is better to make devs new to React aware of avoiding bind in
render than it is to retroactively add constructor-placed binds in the app
when frame rates are below 25-30 by default, something I’ve seen quite a bit
in Redux apps.

~~~
JasonSage
> In IE 11 and Safari for example with Babel transpilation, arrow functions
> are going to eat your frame rate alive, especially if you have a text input
> handler with some sort of validation around it. Even in latest Chrome and
> Firefox, you end up with inputs that feel slow.

Except, the author writes that there isn't quantitative data to back up this
kind of claim. It's exactly the kind of armchair performance speculation that
they're trying to refute! Can you cite a benchmark or real-world app which
exhibits this kind of behavior? I'm not saying you're wrong, but the author deals
with tons of codebases and tons of developers and says that this comes up a
lot but there's never any data to back it up.

~~~
Silhouette
The author dismisses a lot of things as speculation that several years of hard
data on my own projects say can be real. It doesn't take much in the way of
large tables or lists or SVGs to be in a situation where your costs for
rendering and/or reconciliation with the real DOM are significant.

The bottleneck typically isn't the time to build the new function every time
you render the React component. The problem is that doing so can remove all of
the benefit you would otherwise get from using a pure component (or otherwise
implementing shouldComponentUpdate), because each function you pass in as a
prop will have a new identity and therefore will not compare equal to the
previous one, even if in reality it is calling the same underlying function
with the same parameters so nothing material has changed.
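The mechanism described above can be demonstrated outside React with a plain shallow comparison, similar in spirit to what `React.PureComponent` does (this is a sketch, not React's actual implementation):

```javascript
// Minimal shallow prop comparison, illustrative of PureComponent's check.
function shallowEqual(a, b) {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((k) => Object.is(a[k], b[k]));
}

const save = (id) => console.log("saving", id);

// Two consecutive parent renders, each creating a fresh arrow function:
const props1 = { id: 7, onClick: () => save(7) };
const props2 = { id: 7, onClick: () => save(7) };

// Nothing material changed, but the closures have different identities,
// so a pure child would re-render anyway:
console.log(shallowEqual(props1, props2)); // false

// Passing the same stable reference keeps the bail-out working:
const stableClick = () => save(7);
const props3 = { id: 7, onClick: stableClick };
const props4 = { id: 7, onClick: stableClick };
console.log(shallowEqual(props3, props4)); // true
```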

~~~
acemarke
Legitimate request: can you provide some actual benchmarks or examples that
demonstrate these perf issues? Part of the reason why Ryan, Michael, and
others like myself are pushing back on this is that "creating functions kills
perf!" has become a kind of "received wisdom" that people are blindly and
dogmatically passing around, yet there don't seem to be hard numbers to
actually back up what everyone "knows" to be an issue.

Ryan did also address the `shouldComponentUpdate / PureComponent` aspect in
the post already.

~~~
Silhouette
_Legitimate request: can you provide some actual benchmarks or examples that
demonstrate these perf issues?_

Not straightforwardly (due to pseudonymity and in any case client
confidentiality) but I can give you a typical example.

I've worked on quite a few web apps in recent years that have used SVGs to
draw somewhat complicated and interactive visualisations. That needs a level
of efficiency well beyond your basic "Here's a control, here's the underlying
data in the model, there's roughly a 1:1 correspondence between them" web app
UI. For example, some of the layout algorithms could take a significant
fraction of a second just to run once, so typically you get a whole extra
level of cache sitting between your underlying data and your React-based
rendering functionality. Even then, a rerender of the whole visualisation can
easily take a significant fraction of a second, just due to the number of
elements involved and the relatively poor performance of browsers at handling
large inline SVGs in various situations.

So, you immediately need to start isolating which parts of the SVG actually
need updating in response to user events, including relatively frequent ones
like mouse moves or repeated keypresses. But if many of those child components
are also going to be interactive, there are potentially hundreds or thousands
of event handlers flying around. If each type of interaction needs a handler
that has some common information, like say where your underlying controller
logic lives, and also something specific to each element, like say the ID(s)
of some underlying item(s) in your data model that are related to that part of
the visualisation, then either you get a lot of binding going on or you have
to pass in a common function and manually call it with the extra parameters
from the event handler functions for your SVG elements.

One thing you really don't want to do in this situation is to have a function
involved that is (a) recreated on each render of (b) a high-level component
with a lot of children that will ultimately be calling that function. Doing so
will inevitably mean that you need to rerender all such children in your viz
instead of just a very small number that might be relevant to say a particular
mouse move or keyboard event.

The thing that gets me is that in this sort of environment, whether it's an
SVG or a table or a list or any other part of the DOM that will contain a lot
of content and probably be based on a lot of different underlying data points
in your model, it's entirely predictable that you will have to focus on only
narrow parts of your rendered content in response to frequent user-triggered
events. So you know right from the start that you can't just bind your event
handling functions at the top level and pass them down through props and add
more specific parameters as you pass them into more specific child components.
That entire strategy is doomed before you write a line of code (though
unfortunately it will probably work just fine if you're developing with a very
small set of data and a very simple resulting visualisation, which can create
a false sense of confidence if you're not careful). So you might as well plan
the design for your event handling and callbacks around what you're inevitably
going to need from the start. I can't actually give you hard data on the exact
FPS we saw the first time we tried the bad way of doing this, but I can tell
you that it was more like SPF than FPS.

_Ryan did also address the `shouldComponentUpdate / PureComponent` aspect in
the post already._

Honestly, the last parts of the article about ignoring handlers didn't make
much sense to me anyway. I can't see why you'd ever want to pass props into a
React component that weren't ultimately going to influence rendering one way
or another, and therefore why you'd ever be in a position where you could
safely ignore props for shouldComponentUpdate purposes yet wouldn't just
remove them altogether. Certainly any assumptions about event-handling
`on<X>` props not needing to be diffed would be totally wrong in the context of a
table/list/SVG/etc. where each child element is tied to some specific
underlying data point(s) and has event handler functions supplied via props
accordingly. That looked like another of those cases where the author is
generalising their own limited experience and failing to realise that for
plenty of other situations the same assumptions won't apply.

~~~
acemarke
Thanks for the reply and the details. Let me see if I can add more context to
the overall discussion, and why Ryan wrote the article in the first place.

First, the author of the article is Ryan Florence. He's a very well-known name
in the React community, as he's co-author of React-Router, and co-owner of the
React Training company. He makes a living teaching React to developers at
different companies. I don't always agree with _everything_ he says, but he
has a _lot_ of personal experience using React, and has seen a wide variety of
ways that people are learning and using React in the real world. So, I don't
think it's fair to say that he's "generalising his own limited experience".

Second, I would say that Ryan's post isn't trying to argue that you should
never-ever worry about creating functions in `render()`. It _is_ trying to say
that that has become a blanket concern that people blindly repeat, no matter
what the actual performance concerns are for a given component, and that some
of the possible origins for that concern are likely no longer an issue with
modern JS engines.

For example, I know that AirBnB's ESLint config, which is very popular, turns
on the "react/jsx-no-bind" rule [0] by default. This rule flat-out forbids
using `fn.bind()` inside of JSX constructs, but does not take into account
anything like how the component is actually being used. A component rendered
once at the top level of an app is going to have a vastly different
performance profile than a nested component in the middle of a hot UI code
path.

I've seen learners who are still trying to grasp the basics of React being
told "oh, you should never use an arrow function in `render()`, that's slow".
Even if that's true, that's also not the advice the learner needs at that
time. The most common example of this I see is someone who wants to pass a
parameter to a callback, and they do:

        onClick={this.handleClick("stuff")}

That of course fails, because the click handler is being called immediately
during rendering. The simplest fix for this is to just make it an arrow
function that closes over the call:

        onClick={() => this.handleClick("stuff")}

Even if that isn't the most absolutely performant approach, it _works_, it's
simple for the learner to implement, and it solves their immediate problem.
They don't _need_ to be worrying about perf at that level of understanding.

Now, as you said: _if_ you do have an extremely hot and perf-sensitive UI
path, then yes, it's worth worrying about the details like this. But, the
point of the article is that the React community has gotten in the habit of
blindly repeating "DON'T DO THAT EVER!!!!!", when the reality is much more
nuanced: "Don't worry about that.... _yet_".

To specifically answer your last aspect: Let's say you've got a child
component that implements a click handler like this:

        handleClick = () => {
            this.props.doStuff(this.props.id, this.state.someValue);
        }

In that example, `this.props.doStuff` isn't actually affecting rendering. So,
if the parent component passes in a new function reference for
`props.doStuff`, the component doesn't actually need to re-render - the click
handler would behave the same way regardless.

[0] [https://github.com/yannickcr/eslint-plugin-react/blob/master/docs/rules/jsx-no-bind.md](https://github.com/yannickcr/eslint-plugin-react/blob/master/docs/rules/jsx-no-bind.md)
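Acting on that observation might look like the following sketch: a shallow comparison that deliberately skips function-valued props, on the stated assumption that handlers only matter when invoked, not when rendered. The function name is illustrative, not from the post.

```javascript
// Sketch: compare props shallowly, but treat a function-to-function change
// as "no change", since a new handler identity alone doesn't alter output.
function shouldUpdateIgnoringHandlers(prevProps, nextProps) {
  const keys = new Set([...Object.keys(prevProps), ...Object.keys(nextProps)]);
  for (const key of keys) {
    if (typeof prevProps[key] === "function" && typeof nextProps[key] === "function") {
      continue;
    }
    if (!Object.is(prevProps[key], nextProps[key])) return true;
  }
  return false;
}

// A new doStuff reference alone doesn't force a re-render:
console.log(shouldUpdateIgnoringHandlers(
  { id: 1, doStuff: () => {} },
  { id: 1, doStuff: () => {} }
)); // false

// But a real data change still does:
console.log(shouldUpdateIgnoringHandlers(
  { id: 1, doStuff: () => {} },
  { id: 2, doStuff: () => {} }
)); // true
```

The caveat from the thread applies: this is only safe when no function-valued prop feeds into rendered output.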

~~~
Silhouette
_First, the author of the article is Ryan Florence. He's a very well-known
name in the React community, as he's co-author of React-Router, and co-owner
of the React Training company._

I appreciate that. However, there are plenty of genuine experts with low
profiles, and unfortunately there are also plenty of high-profile trainers who
have relatively little recent, deep experience as practitioners because
they're constantly jumping from one context to another. Whenever possible, I
prefer to consider writing objectively on its merits, rather than by who wrote
it.

Just to be clear on one point, though, when I wrote "his own limited
experience", that wasn't intended as any sort of insult or criticism, merely a
statement of fact. It's a huge industry, and no-one can possibly know
everything about how a library like React is being used or what experience has
been collected by the numerous other people using it in different ways.
Authors who write as if they do are a pet peeve of mine, particularly when, as
in this case, they do so in a way that implies others who might disagree are
automatically wrong.

_It is trying to say that that has become a blanket concern that people
blindly repeat, no matter what the actual performance concerns are for a given
component, and that some of the possible origins for that concern are likely
no longer an issue with modern JS engines._

I agree that we should be careful about dogma, and we should be careful about
relying on data or assumptions that are no longer relevant. These are
perennial issues in tech fields, and caution is wise.

However, I think it's also wise to remember that not everyone is running the
latest and greatest browsers and JS engines. For example, there are many web
sites used in businesses where desktops will be locked to a specific and
possibly older browser to ensure compatibility with their other in-house
tools.

_I've seen learners who are still trying to grasp the basics of React being
told "oh, you should never use an arrow function in `render()`, that's slow".
Even if that's true, that's also not the advice the learner needs at that
time._

There is a more general point about teaching here, I think. My own experience
from teaching a variety of fields over the years is that showing a
_simplified_ version of something is often helpful for beginners, but showing
an _inaccurate, misleading or exaggerated_ version of something can be
dangerous.

That same beginner who learns to casually build new functions by default in
this sort of situation is, almost by definition, not going to know enough to
realise when doing so in a more demanding context might be dangerous. There is
also no guarantee that anyone will come along later, at a more suitable point
in their development, to broaden their understanding or to warn them about the
implicit assumptions behind what they learned before.

In this case, clearly I disagree with the author about there not being any
data to support concerns about passing new functions as props in this way, and
I also think he exaggerates the cost of avoiding that problem by using any of
the common alternative idioms in such cases. So I don't think it actually is a
very good idea to teach beginners that this is OK and write off the concerns
about performance so definitively as some sort of mere historical point that
no longer applies.

_To specifically answer your last aspect: Let's say you've got a child
component that implements a click handler like this: [...] In that example,
`this.props.doStuff` isn't actually affecting rendering._

OK, fair enough, that's a valid case. Ironically, it's also one I should have
allowed for in what I wrote before: I seem to have inadvertently edited out a
paragraph from an earlier comment about the alternative strategies we use, but
one of the specific examples I gave was having props include both a callback
function without pre-bound parameters and, separately, an ID value. If you can
accept the arguably undesirable dependency of having the component's own event
handling code always call the callback function with the ID as its parameter,
both the callback function and the ID can remain strictly equal in props from
one render to the next if nothing is actually changing. Then you wind up with
code essentially the same as your handleClick example, and yes, if you're
careful, you could skip checks on callback functions in shouldComponentUpdate
that were only used in that way.
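The "stable callback plus plain ID" pattern can be sketched outside React (names like `onAction` and `makeChildProps` are hypothetical): the parent passes the same function reference on every render, together with an ID, and the child supplies the ID at call time, so both props stay strictly equal between renders.

```javascript
// Parent-side: one stable function reference, reused across renders,
// instead of binding the id into a fresh closure each time.
const onAction = (id) => `acted on ${id}`;
const makeChildProps = (id) => ({ onAction, id });

const firstRender = makeChildProps("row-7");
const secondRender = makeChildProps("row-7");

// Both prop values compare strictly equal, so a pure child can bail out:
console.log(firstRender.onAction === secondRender.onAction); // true
console.log(firstRender.id === secondRender.id); // true

// Child-side: the event handler adds the id itself when calling back.
const handleClick = (props) => props.onAction(props.id);
console.log(handleClick(firstRender)); // "acted on row-7"
```

The trade-off is exactly the dependency described above: the child's handler must know to pass the ID through.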

------
n0us
Almost all of our functions are inlined and furthermore we use the () =>
syntax on class methods instead of manually binding them in the constructor.
I've only run into performance issues because of this once, and it was fixed
by using shouldComponentUpdate. This whole debate seems like one of those
memes that hits a nerve one way or the other with a lot of people but has
little bearing on real life.

~~~
always_good
We have performance memes because profiling is hard and we're often unable to
get a hint beyond our intuition.

People like to act like you just open a profiler and see actionable data every
time, but that's only under ideal circumstances. In a busy application, you're
more likely getting nickel-and-dimed by more subtle things, and your flame
graphs are stacked with React/library internals that don't point to a single
problem in your code. And you have to have the skills to recognize and verify
small performance gains when you _do_ have them.

If that stuff doesn't line up, then you're stuck with little more than
superstition and hoping other people's solutions generalize over your
problems.

------
arenaninja
We're still working on our own first truly large-scale React application, and
while we haven't seen significant memory pressure from inline functions, I
still think they make debugging harder.

I was trying to fix a focus issue on a component, so I hooked into
componentWillUpdate to see what was happening, and it was ridiculously noisy.
I added a function to diff the state and props objects and most of the time
the only changes were the inlined functions. I removed the inlining and I
could debug again as only 4 diffs occurred and they were actual props/state
changes, hooray!
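A diff helper of the kind described might look like this (names are illustrative, not the commenter's actual code): it reports which props or state keys actually changed between renders, using the same reference equality React's shallow comparisons rely on.

```javascript
// Returns the keys whose values changed between two props/state snapshots.
function diffKeys(prev, next) {
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  return [...keys].filter((key) => !Object.is(prev[key], next[key]));
}

const base = { value: "hello", disabled: false };

// With inline handlers, every render reports a "change" even when the real
// props are identical -- the noise described above:
const noisy = diffKeys(
  { ...base, onChange: () => {} },
  { ...base, onChange: () => {} }
);
console.log(noisy); // ["onChange"]

// With a stable handler, only genuine changes show up:
const stable = () => {};
const quiet = diffKeys(
  { ...base, onChange: stable },
  { ...base, value: "world", onChange: stable }
);
console.log(quiet); // ["value"]
```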

It may be that there's zero performance gain from avoiding them, but when you
work with a large complex application any time you can save debugging is a
blessing. I may not avoid inlined functions wholesale now, but I will
definitely do so for complex components where they will need to be debugged

~~~
marcosdumay
That is a problem of emulating functional behavior in a language that is not
functional at its core.

In functional languages you mostly don't debug: you reconstruct your functions
in a controlled environment while you look at the intermediate results. That
works everywhere except in a tiny effectful portion of your code, where you
have to actually debug it like the imperative code that it is. In JavaScript
that effectful portion is usually all of the code, and even the places that
could be reconstructed at will have to be debugged first, to be sure they have
no side effects.

~~~
arenaninja
Funny that you mention this, because part of the problem was that someone had
made a fully functional wrapper around the component. Someone else then made a
second, class-based wrapper around the functional one, because the component
has text so state is needed. I took out the functional wrapper entirely and
kept maybe one of the functions in the class wrapper.

Now I think a great many things can be made functional in JavaScript, but
purists ruin this for everyone else.

~~~
marcosdumay
We seem to be defining those words differently. From my position, it's
impossible to create a pure wrapper for an interface widget.

