
React.Js: Achieving 20ms server response time with Server Side Rendering - ateevchopra
https://ateev.in/react-js-achieving-20ms-server-response-time-with-server-side-rendering-1ea80e420d88#.kpe4d4xgb
======
btown
What this article _should_ have linked to:
[https://github.com/walmartlabs/react-ssr-optimization](https://github.com/walmartlabs/react-ssr-optimization)

> After peeling through the React codebase we discovered React’s
> mountComponent function. This is where the HTML markup is generated for a
> component. We knew that if we could intercept React's
> instantiateReactComponent module by using a require() hook we could avoid
> the need to fork React and inject our optimization. We keep a Least-
> Recently-Used (LRU) cache that stores the markup of rendered components
> (replacing the data-reactid appropriately).

> We also implemented an enhancement that will templatize the cached rendered
> markup to allow for more dynamic props. Dynamic props are replaced with
> template delimiters (i.e. ${ prop_name }) during the react component
> rendering cycle. The template is then compiled, cached, executed and the
> markup is handed back to React. For subsequent requests the component's
> render(..) call is short-circuited with an execution of the cached compiled
> template.
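The templatization idea reads roughly like this in plain JavaScript (a toy version for illustration; the `Map` standing in for a real LRU cache, the names, and the `${...}` regex are all assumptions, not the actual react-ssr-optimization internals):

```javascript
// Toy version of the templatized-markup cache; names, the Map standing
// in for a real LRU, and the ${...} regex are all illustrative.
const cache = new Map();

function renderCached(key, template, props) {
  let compiled = cache.get(key);
  if (!compiled) {
    // Compile the "${prop_name}" delimiters into a function once...
    compiled = (p) => template.replace(/\$\{(\w+)\}/g, (_, name) => p[name]);
    cache.set(key, compiled);
  }
  // ...then every later request just executes the cached template.
  return compiled(props);
}

const markup = renderCached(
  'Greeting',
  '<div data-reactid="1">Hello, ${name}!</div>',
  { name: 'Ada' }
);
// markup === '<div data-reactid="1">Hello, Ada!</div>'
```

Subsequent requests with different props skip the render entirely and only pay for the string substitution.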

~~~
jotto
> After peeling through the React codebase

Except you don't have to peel through their codebase or intercept any calls;
they expose a clean interface for this: [https://github.com/facebook/react-devtools](https://github.com/facebook/react-devtools)

You can capture it like this:

    
    
      window.__REACT_DEVTOOLS_GLOBAL_HOOK__.inject = function(obj) {
        // obj.Mount exposes React's internal Mount module
      };
    

Which is how we're doing SSR at
[https://www.prerender.cloud/](https://www.prerender.cloud/) for compatibility
across React versions.

~~~
spicyj
Please know this is not supported.

~~~
jotto
Not supported, as in, the react-devtools that rely on it could break at any
moment? Or just that you don't officially document the API?

~~~
spicyj
The latter – and it is likely that we will change this API in the future and
update the devtools to match. No guarantee that your monkeypatch will continue
to be possible, especially as we rewrite large parts of the internals
([https://github.com/acdlite/react-fiber-architecture](https://github.com/acdlite/react-fiber-architecture)).

------
misterbowfinger
The headline should've been "it takes 150ms to do server-side rendering with
React," not "hey everyone, I can put something in Redis." I would _not_ take
this person's advice. There's most likely something else really wrong with
their code.

Also.... 20ms with caching the pages in Redis? That sounds really, really
slow. There's definitely something else going on.

~~~
andrewingram
150ms is pretty standard server-side rendering React for a relatively simple
page, but it scales pretty predictably with the complexity of the component
tree. Obviously there are some caching strategies available, but I think the
only real solution is going to come from serious investment in the problem
from the React side. I can't see the core devs tackling it, because Facebook
doesn't require it, but maybe the community can step in.

~~~
flukus
Really? Because that is absolutely awful compared to other tech stacks, even
other tech stacks from more than a decade ago.

~~~
andrewingram
That's because React isn't designed for server-rendering. It's able to do it,
but if you expect the same performance as traditional template engines
you're going to have a bad time.

~~~
flukus
So why are people using server side react then? Is it any faster client side,
because that would also be very slow?

~~~
mercer
One benefit is that you can render server-side and seamlessly take over on the
client-side without needing a re-render.

Another benefit is simply React's component-based approach itself. I once went
as far as using React for wordpress templating just because I prefer its
approach to the standard WP <?php echo $blah ?> shit. I wouldn't recommend
that to others though.

------
prashnts
> For this, we created a small param cache=false. Whenever the URL is hit with
> this param, the node server makes an API call instead of fetching data from
> the redis cache. And hence the cache is updated with the newer data.

> Whenever you deploy, new chunk hash for js and css files gets generated.
> This means if you’re storing the whole HTML string in the cache, it’ll
> become invalid with the deployment. Hence, whenever you deploy, the redis db
> needs to be flushed completely.

These arguments sound like over-engineered "solutions" by somebody who did not
do their homework. To speed up your SSR, render components, not the _whole_
page. The vast majority of your page is going to remain constant: the head,
navigation, footer. Caching these components as partials shrinks your Redis
footprint by a very, _very_ large margin. Once you start caching hundreds of
thousands of full HTML pages, the Redis server will start swapping content
from memory to disk, and there you've defeated the whole point of using Redis.
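The partial-caching approach can be sketched in plain JavaScript (all names, markup, and the in-process `Map` here are illustrative; a real setup would render the body with React and back the cache with Redis or an LRU):

```javascript
// Sketch: cache the static shell once, re-render only the per-request body.
const partials = new Map();
let renders = 0; // counts real partial renders, for illustration

function renderStatic(name, renderFn) {
  if (!partials.has(name)) {
    renders += 1;
    partials.set(name, renderFn()); // rendered once, reused thereafter
  }
  return partials.get(name);
}

function renderPage(renderBody) {
  return [
    renderStatic('head', () => '<head><title>Blog</title></head>'),
    renderStatic('nav', () => '<nav>Home | About</nav>'),
    renderBody(), // only this part runs on every request
    renderStatic('footer', () => '<footer>2016</footer>'),
  ].join('\n');
}
```

Stitching cached strings around a freshly rendered body is cheap compared to re-rendering the whole tree, and the cache only ever holds one copy of each static partial.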

Also, your users are _still_ going to see the 810ms latency the first time
they access your service. How often do you think they'll be reloading right
after the page is loaded? And once the cache is invalidated -- which I suppose
would happen frequently -- the _visible_ latency is still high.

~~~
ateevchopra
> To speed up your SSR, render components, not the _whole_ page.

That is totally wrong. Rendering components again and again is not performant
at all. "Rendering components, storing them somewhere, and finally, when the
user requests, stitching them together and handing it to the user." Is that
your solution? That would be even more expensive than normal rendering.

> your users are _still_ going to see the 810ms latency the first time they
> access your service

That is not true either. Only one user will see the 810ms latency. All other
users who access this page will get sub-20ms response times. Have you ever
worked with servers/caching, and do you know how they work?

~~~
prashnts
> "Rendering components, storing them somewhere, and finally, when the
> user requests, stitching them together and handing it to the user." Is that
> your solution? That would be even more expensive than normal rendering.

String concatenation/substitution is more expensive than DOM parsing?

> Only one user will see the 810ms latency.

If your whole content is cacheable, then a CDN will be far more useful and
have way less than 20ms response time. 20ms is not fast. Selectively
invalidating CDN content as new data is available is much more performant.
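Selective invalidation usually works by tagging responses so the CDN can purge them by key; Fastly's `Surrogate-Key` header is one real implementation of the pattern, though the post object and tag names below are made up:

```javascript
// Hypothetical sketch of tag-based CDN invalidation. Fastly's
// Surrogate-Key header is one real implementation of this pattern;
// the post shape and tag names are illustrative.
function cdnHeadersFor(post) {
  return {
    // Let the CDN hold the page for up to a day...
    'Cache-Control': 'public, s-maxage=86400',
    // ...but tag it so purging "post-<id>" invalidates it the moment
    // the underlying data changes.
    'Surrogate-Key': `post-${post.id} all-posts`,
  };
}
```

When a post is edited, the app issues one purge for its tag and only that page falls out of the edge cache, instead of flushing everything on deploy.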

> Have you ever worked with servers/caching

Yes.

> know how they work

Yes, still learning though.

~~~
ateevchopra
> String concatenation/substitution is more expensive than DOM parsing?

Not 100% sure. Theoretically speaking, if you have 100 components in your page,
you'll be hitting the cache store 100 times for all components. That's not
performant at all.

> If your whole content is cacheable, then a CDN will be far more useful and
> have way less than 20ms response time. 20ms is not fast. Selectively
> invalidating CDN content as new data is available is much more performant.

Yes. Thanks for the suggestion. We are experimenting with that :)

------
mgallowa
I thought I was going to learn how to speed up my SSR, instead I learned about
how to hide from my problems by using Redis. I am disappointed.

~~~
ateevchopra
Sorry to disappoint you, but sometimes simple solutions solve big problems :)

------
sciurus
At the point where you're able to cache the exact HTML you're returning to the
client, it's more efficient to do that in front of your node server than
behind it. A cache hit can avoid your application completely, which is a real
boon for performance and scalability. You can run your own caching http proxy,
like varnish, or use a CDN like Fastly.

~~~
ateevchopra
Yes, totally agree. Actually we are doing some experiments with that too. We
chose redis for its easier, simpler integration and highly programmable
controls.

------
wmf
This makes me wonder how many people are using AJAX as an excuse for slow
response times. If >500 ms isn't acceptable for a full page load then it
shouldn't be acceptable for an AJAX REST call either. A page showing a spinner
instead of content isn't any more useful to the user than a blank page.

~~~
abritinthebay
Agreed! That said, a lot of people don't realize that Amazon's services can be
_quite slow_ and think they can always throw it on there and it'll be fast.

They're often high latency (for a service - 180+ms) but are also high
availability.

Slap a CDN in front (like Fastly) and get in the 10s of ms response time if
possible.

------
yazaddaruvala
If your HTML is cacheable, you just need to set your HTTP headers
appropriately.

You can also invest in a CDN. Now we have a React.js SSR with 0ms server
response time! :)
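Setting the headers amounts to something like this (a minimal sketch; the helper name and the default lifetimes are assumptions, not a prescription):

```javascript
// Minimal sketch: choose a Cache-Control value so a CDN or shared
// cache can answer without touching the node server at all.
// max-age applies to browsers, s-maxage to CDNs/shared caches.
function cacheControlFor(cacheable, { browser = 60, cdn = 300 } = {}) {
  if (!cacheable) return 'no-store';
  return `public, max-age=${browser}, s-maxage=${cdn}`;
}

// e.g. with Node's http module:
//   res.setHeader('Cache-Control', cacheControlFor(true));
```

A cache hit at the edge never reaches the app, which is where the "0ms server response time" joke comes from.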

------
jeffnappi
TLDR; Caching improves performance.

------
schmrz
I'm quite interested in learning why the author decided to use React for the
user-facing part of the blog in the first place.

~~~
ateevchopra
Hello, Schmrz! I wrote about it a while ago: [https://ateev.in/why-react-231af9e73d9a](https://ateev.in/why-react-231af9e73d9a)

------
merb
> 150 ms for server side rendering

that's really, really bad for generating HTML (and even sending it to the
user).

I can achieve the whole damn thing in 20ms or less.

> The average response time fell to 20 ms !

Yep, in Java/Go/whatever you don't need the cache; with it, your avg response
would drop even further!

~~~
flukus
A decade ago I was on a project where we aimed for 30ms, with database calls
and all. For most of the app we got there, with a few pages blowing out to
50-60ms and 1 or 2 being a bit more. Even that was on hardware that wasn't
flash for the time.

------
petetnt
The title reminded me of react-dom-stream[0], which is accompanied by this
great talk[1]. Went back to see if some parts of it landed in core[2], but it
hasn't seen any progress as of late. Oh well.

[0]: [https://github.com/aickin/react-dom-stream](https://github.com/aickin/react-dom-stream)
[1]: [https://www.youtube.com/watch?v=PnpfGy7q96U](https://www.youtube.com/watch?v=PnpfGy7q96U)
[2]: [https://github.com/facebook/react/issues/6420](https://github.com/facebook/react/issues/6420)

------
benguild
This article mentions SEO.

As far as I know, as long as the content is embedded in JSON/JS when the page
loads, it's fine to then "render" it with JavaScript. It's 2016 and Google
started crawling JS websites a while ago.

However, if you fetch it with AJAX after the page loads, Google won't see it
because it doesn't necessarily follow AJAX calls nor wait around for them to
return in 810ms. They'll most likely only render the bundled content.

You can use the "fetch as Google" tool in Google Webmaster Tools to try this
out for yourself.

------
agnivade
Soo .. the whole point of the article was that you used Redis to cache your
html response and got an improvement.

Umm .. nice work ? I guess ..

------
petercue
So... we're back to server-side rendering?

~~~
mixedCase
Web development: Everything old is new again.

I wonder when using shell scripts and/or Makefiles instead of
Webpack/Browserify/Gulp/Bower/NPM/whatever for building will become popular
again.

~~~
flukus
Funnily enough, I only recently discovered and started using make. I started
in the "everything is xml" era of build tools and was unsatisfied with
many/all of the newer ones, so I tried make. Wish I'd done it years earlier.

All the tutorials seem to focus on using it for C++ projects with all of their
complexities, when it's really simple for a typical Java or C# project.

------
amelius
Nice, except, I suppose, if you have a system where the HTML ultimately
depends on the size of the window.

------
angry-hacker
Why on earth react for a blog..?

------
geggam
Please tell me all the horrible javascript is going to be rendered server side
soon

My browser would thank you

