
The Benefits of Server Side Rendering Over Client Side Rendering - wilsonfiifi
https://medium.com/walmartlabs/the-benefits-of-server-side-rendering-over-client-side-rendering-5d07ff2cefe8
======
ec109685
At what point do developers question whether they chose the right framework
when their solution requires a blocking call lasting hundreds of milliseconds
in a single-threaded server:

> SSR throughput of your server is significantly less than CSR throughput. For
> react in particular, the throughput impact is extremely large.
> ReactDOMServer.renderToString is a synchronous CPU bound call, which holds
> the event loop, which means the server will not be able to process any other
> request till ReactDOMServer.renderToString completes. Let’s say that it
> takes you 500ms to SSR your page, that means you can do at most 2
> requests per second. _BIG CONSIDERATION_
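
The blocking the quote describes is easy to demonstrate without React at all. Below is a minimal sketch where `fakeRenderToString` (a hypothetical stand-in for a slow `ReactDOMServer.renderToString`) busy-waits to simulate a CPU-bound render, delaying an unrelated timer:

```javascript
// Sketch: how a synchronous, CPU-bound call holds the Node.js event loop.
// fakeRenderToString is a hypothetical stand-in for a slow render; it
// busy-waits for `ms` milliseconds on the main thread.
function fakeRenderToString(ms) {
  const start = Date.now();
  while (Date.now() - start < ms) {} // the event loop is held here
  return '<div>rendered</div>';
}

const scheduled = Date.now();
setTimeout(() => {
  // Scheduled for 10ms, but it can only fire once the 100ms "render"
  // returns control to the event loop.
  console.log(`timer delayed by ~${Date.now() - scheduled}ms`);
}, 10);

fakeRenderToString(100); // every concurrent request waits behind this
```

While the fake render runs, no other request handler, timer, or I/O callback can execute, which is exactly why a 500ms render caps a single process at roughly 2 requests per second.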

~~~
DougWebb
I've always found that nearly all of the time spent when requesting a page of
SSR'd content (whether that be a full page or a rendered component returned to
an AJAX request) was spent in the processing before the render happens. Given
that, the added complexity of CSR has never been worth it for me.

This is situational, I think. My work mostly involves applications that do
significant back-end processing for most requests, and my SSR is always done
using a framework that has pre-compiled code doing the rendering rather than
an interpreted language. (Perl and C#.) This combination adds a lot of pre-
render computing and optimized rendering, which adds up to SSR being a good
choice.

I'm not sure what that says about when CSR would be a good choice. If your
requests _don't_ do much back-end processing, but still have a long (500ms?)
response time, that seems like you're doing something wrong rather than an
opportunity to use CSR. Maybe you've chosen a poorly-performing rendering
framework. Maybe you're trying to render too large a page (which would be even
more of a problem client-side.)

~~~
ww520
While the rendering time is minimal, the rendering code still blocks to wait
for the underlying processing code, making the throughput low.

~~~
DougWebb
That doesn't make sense, unless you're talking about a specific framework that
has bad performance characteristics.

Let's say you've got a request that does SSR and takes 500ms, with 450ms of
that time spent processing the request and 50ms spent rendering the response.
If you switch to CSR, you still have to wait 450ms to process the request, and
you've got to serialize the response data (e.g. render it to a more concise
format than HTML), which is going to take some of that 50ms you're trying to
save. So, where is the blocking you're talking about? How does CSR make it go
away?
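
That arithmetic can be made explicit with the hypothetical 450ms/50ms split from the paragraph above, plus an assumed serialization cost:

```javascript
// Back-of-the-envelope version of the argument above. CSR doesn't remove
// the request-processing time; it only trades HTML rendering for data
// serialization. The 20ms serialization cost is an assumption.
const processingMs = 450;  // back-end work, paid either way
const ssrRenderMs = 50;    // render HTML on the server
const csrSerializeMs = 20; // serialize JSON for the client instead

const ssrTotalMs = processingMs + ssrRenderMs;    // page arrives fully rendered
const csrTotalMs = processingMs + csrSerializeMs; // client still has to render

console.log({ ssrTotalMs, csrTotalMs });
```

The CSR response leaves the server slightly sooner, but the client still has to render it, so the saving is marginal at best.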

What you wrote sounds like you're describing a singleton that handles all
rendering for all requests, and can only handle one request at a time. If
that's the case, your framework is a toy and you need to ditch it for
something that can handle multiple concurrent requests independently of each
other.

------
ChrisSD
CSR bugs me most on mobile. What should be simple content, like a reddit page
or a tweet, leaves me staring at a large pulsing logo for a while as the whole
app is loaded and the content is rendered.

I get offloading some rendering (that server time adds up) but meet me half
way. At least show me some content while the rest loads.

------
ssahoo
Almost everything was server-side rendered 5 years ago. Then Angular
appeared.

I have seen a login page that weighed 5 MB, full of JS, because it
downloaded the full application to the client before rendering the login page.

~~~
tootie
Early attempts were pretty ham-fisted and only really useful for backoffice
web apps with captive users and LAN connections. The past few years have seen
a lot of advancement in tools like webpack to streamline how much code gets
served for a given page.

~~~
angersock
And for a login page, which is effectively a form element with two input
fields, that is overkill.

~~~
tootie
Well, what was likely happening is that the entire application was bundled
into a giant JS file that was served up on every page regardless of how much
of it you needed.
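
The usual fix is route-level code splitting, so the login page only pulls down its own chunk. A framework-free sketch of the idea (the route table and page modules are hypothetical; with webpack, each loader would be a dynamic `import()` emitted as a separate chunk):

```javascript
// Sketch of route-level code splitting. Each async loader stands in for
// a dynamic import() that a bundler like webpack would turn into its own
// chunk, so visiting /login never downloads the full app bundle.
const routes = {
  '/login': async () => ({ render: () => '<form>login</form>' }),        // tiny chunk
  '/app':   async () => ({ render: () => '<div>full application</div>' }), // heavy chunk
};

async function loadPage(path) {
  const loader = routes[path];
  if (!loader) throw new Error(`no route for ${path}`);
  const page = await loader(); // only now is the page's code fetched
  return page.render();
}
```

With this shape, the 5 MB application chunk is only fetched after a successful login, not before the login form can even appear.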

------
tetha
Hm, any thought on the performance/throughput consideration?

I'm not experienced with Node.js, but something hogging the main event loop of
an asynchronous server with a long computation sounds horrible. It'll mangle
your throughput, and it'll create some really strange latency spikes for
unrelated requests, because the slow computation blocked the event thread from
delivering the response of another request. Is this not as much of a problem
practically, or would you isolate that SSR code from REST code in different
application instances?

In a threaded server like java application servers or go applications, that
should be a non-issue since requests are handled in mostly independent
threads. It might increase the necessary compute resources some, but that's
expected when moving work to the server side.

~~~
moltar
Keep in mind that only rendering blocks, not the entire request. Your database
calls and other I/O are still non-blocking. The actual rendering time is
measured in tens of ms in my experience, but then I’m not Walmart. At the
same time, I haven’t seen any pages on Walmart that take half a second to
render.

I think you are doing something wrong if your render time is more than a few
ms.

------
matthewmacleod
I tried—I really did try—to use server-side rendering for every project I've
worked on. I find it really unpleasant when I hit a site, like a blog or
something, which has almost entirely static content and yet still renders it
locally. The experience is objectively worse, and I had no interest in
making that the case for my users.

But there's a middle ground in practice. Public-facing website that users
might hit from a search engine? Obviously render it on the server, and if
required progressively enhance it. Something more akin to a web application?
Server-side rendering makes everything slower and more complex without
benefiting any users. If you aren't using Node on the server, it's even more
complex.

I really hate that this is the case. I had this firm idea in my head – "every
URL is an HTML page, and users should be able to take that URL and request it
using Curl or whatever, and see the content of the page". In practice, we were
developing a highly-interactive, domain-specific application that relied
heavily on client-side scripting to be realistically useful for users. There
were no users without JS that I was able to find. We were doing a huge amount
of work to follow a rule of progressive enhancement that in practice slowed
down the experience for users, benefited nobody, and made development roughly
an order of magnitude harder.

~~~
hartator
Can you elaborate? My understanding is that only the initial version of your
components is server-side rendered, not all of it. Then JS takes over once
it's loaded. It shouldn't change your existing code much.
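
That pattern can be sketched without any framework: the server renders the initial markup from the same component function the client uses, and embeds the state the client script needs to take over. The `Counter` component and state shape here are made up for illustration.

```javascript
// Framework-free sketch of "render first on the server, let JS take over".
// Counter is a hypothetical shared component: a pure function of state.
function Counter(state) {
  return `<button id="inc">clicked ${state.count} times</button>`;
}

// Server side: send the initial markup plus the serialized state that the
// client script will pick up when it takes over ("hydration" in React).
function renderPage(state) {
  return Counter(state) +
    `<script>window.__STATE__ = ${JSON.stringify(state)}</script>`;
}

// Client side (conceptually): read window.__STATE__, attach event
// handlers to the already-present markup, and re-render Counter on change.
```

The user sees meaningful content from the first response, and the page becomes interactive once the script loads.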

~~~
matthewmacleod
Yeah, in theory it's pretty simple. If you're using Javascript on both the
server and client, it's definitely easier. But I guess I'm conflating server-
side rendering with "pages that don't require Javascript to work", which is
definitely more work.

------
codedokode
I don't really see what the benefits of using React in an online shop are.
SPAs are good for interactive, complicated interfaces, but an online shop
is mostly a set of static pages - product lists, search and product pages. I
hope they don't use React for the static part of the page and only use it for
rendering the cart and other interactive parts.

------
JeanMarcS
> SSR TTFB (Time To First Byte) is slower than CSR

Well, it of course depends on what your content is, but you can still use a
cache service before.

I set up the infrastructure for a client's e-commerce site. They use
Magento. I put Varnish in front of it and it is much faster than
without. Most pages are, in fact, static pages (home, categories or products);
the dynamic parts (like prices or stock) are AJAX-loaded after the content
renders, so it's a blast for UX.
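
That split comes down to cache headers: pages that are effectively static get a long `s-maxage` so Varnish can serve them, while the AJAX endpoints for prices and stock stay uncacheable. A minimal sketch (the URL scheme and TTL are assumptions, not the actual Magento setup):

```javascript
// Sketch: choosing Cache-Control per URL so a proxy like Varnish can
// serve the static-ish SSR pages while dynamic data bypasses the cache.
// The paths and the 5-minute TTL here are hypothetical.
function cachePolicy(url) {
  if (url === '/' || url.startsWith('/product/') || url.startsWith('/category/')) {
    // Mostly-static SSR pages: the proxy may cache and serve these.
    return 'public, s-maxage=300';
  }
  // AJAX-loaded dynamic data (prices, stock): never cache.
  return 'no-store';
}
```

With this in place, most requests never reach the Magento backend at all, which is where the "really faster" comes from.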

Of course it makes no (well, less) sense for a SPA. But for content
websites, I think it's more logical to do SSR than to force users to load all
the JS files just to see what's on the page.

Edit : typo

------
unleashit
I wonder why the article doesn't mention renderToNodeStream, which is new in
React 16? It allows streaming the rendered output to the browser. It still
blocks the thread, but it's probably a better UX than renderToString.

------
migueloller
> SSR throughput of your server is significantly less than CSR throughput. For
> react in particular, the throughput impact is extremely large.
> ReactDOMServer.renderToString is a synchronous CPU bound call, which holds
> the event loop, which means the server will not be able to process any other
> request till ReactDOMServer.renderToString completes. Let’s say that it
> takes you 500ms to SSR your page, that means you can do at most 2
> requests per second. _BIG CONSIDERATION_

If your `renderToString` is taking 500ms you should really consider using
`renderToNodeStream` [1]. It will significantly reduce TTFB and runs
asynchronously, letting your Node.js server handle more incoming requests.
This [2] blog post goes into more detail.

Also, if you don't want to use streams, Rapscallion [3] provides an
asynchronous alternative to `renderToString`.

[1] [https://reactjs.org/docs/react-dom-server.html#rendertonodestream](https://reactjs.org/docs/react-dom-server.html#rendertonodestream)

[2] [https://zeit.co/blog/streaming-server-rendering-at-spectrum](https://zeit.co/blog/streaming-server-rendering-at-spectrum)

[3] [https://github.com/FormidableLabs/rapscallion](https://github.com/FormidableLabs/rapscallion)

~~~
pjc50
> `renderToString` is taking 500ms

What baffles me is that this is a _long time_. In that time, you've got about
a billion CPU instruction executions. If you include GPUs, they can render
multiple frames of complex scenes to million+ pixel buffers in that time. And
people are having trouble rendering a string?

~~~
tetha
Hm, I'd guess, it's latency to backend servers. Cores are plenty fast for just
about every language.

If you have a good network between systems, each request of any kind to a
backend service will introduce at least 0.5 ms of response time due to
datacenter RTT, so once you hit 250 queries, that's that. And 250 queries are
just one n+1 problem somewhere.
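
That math is worth making explicit. With an assumed 0.5 ms round trip per backend call, an n+1 pattern pays the trip n times, while a batched query pays it once:

```javascript
// Sketch: network round trips, not CPU, dominating SSR response time.
// RTT_MS is the assumed per-call datacenter round trip from above.
const RTT_MS = 0.5;

// n+1 pattern: one query per row, paying the round trip every time.
function sequentialCostMs(nQueries) {
  return nQueries * RTT_MS;
}

// Batched alternative: one query (e.g. WHERE id IN (...)) pays it once.
function batchedCostMs() {
  return RTT_MS;
}

console.log(sequentialCostMs(250)); // 125 (ms of pure network wait)
```

So a single overlooked n+1 loop buys a large chunk of a 500ms render budget before a single instruction of rendering runs.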

~~~
syrrim
The OP says the call is CPU bound, which suggests that even if time is spent
on the network, it is a minority of the total time.

------
aleksei
For dynamic content it may make sense to offload computation to the client
(although in the linked article the opposite was found to hold), but it really
irks me when otherwise static pages are rendered again and again on the device
of every visitor. Such a waste.

