Rendering React without browser JavaScript (medium.com)
67 points by firasd on Apr 9, 2016 | 55 comments

We need to get back to server-side cached rendering, thinking mobile-first and building lightweight websites. I feel in the last few years we've added so many things, and we are not thinking of the people with slow internet. Ember FastBoot, now this, and Angular 2 coming with it too. We are moving towards hybrid front-ends which load fast and deliver the best UX we've had. Thank you for the great effort.

For sure. I was always uncomfortable with JS frameworks that served markup with no content from the server. In my professional experience I've also found that developers underestimate how slow Ajax requests are in real-world scenarios. I wrote some comments on this when I started learning React; I'll just screenshot them instead of pasting: http://i.imgur.com/XSnCqjK.png

Two great articles I reference in this tutorial:

1) http://tomdale.net/2015/02/youre-missing-the-point-of-server...

2) http://rauchg.com/2014/7-principles-of-rich-web-applications...

I am curious how this would change with the multiplexed pipe of HTTP/2, where making a lot of requests is not as expensive. I did a couple of little tests a while ago with nginx 1.9.1 mainline, and while the results showed slight performance improvements in the numbers, they weren't observable in practice. That could be because I didn't put much care into those tests and probably made some mistakes.

Has anyone else done good benchmarks on real-world single-page apps that do a lot of back-and-forth communication with the server over HTTP/2?

I don't expect things to suddenly get dramatically better (especially since resource fetching is only one of the bottlenecks), but maybe it could offer a middle ground in the server-side vs. client-side rendering war.

HTTP/2 will help, but it'll also make server-side rendered pages load even faster, before you load all of your client-side JS.

I agree, to a certain extent, but mostly for the initial load. I remember reading an interesting article about how Twitter does exactly this, with "time to first tweet" being the defining metric. Beyond that, I think stateless services/web sockets/DOM manipulation would best serve users who need real-time updates and dynamic content.

For reference: https://blog.alexmaccaw.com/time-to-first-tweet

I love React, but it seems like it would be easier and more productive to use a toolchain that had a much smaller payload (like Mithril, or React-lite), than engineering a solution like this.

That is, if performance and time-to-click were your concerns.

I'm consistently amused by the amount of learning, tooling and dependencies needed to get these things working (and not very fast at that). Accomplish the same [1] in 15.5k (min): an actually fast [2] vdom view layer, router, isomorphism and observers. IE9+ with an additional 9.5k of polyfills [3].

Disclaimer: I'm the author.

[1] https://github.com/leeoniya/domvm#isomorphism-html-attach

[2] http://leeoniya.github.io/js-repaint-perfs/domvm/

[3] https://github.com/leeoniya/domvm/blob/1.x-dev/dist/polyfill...

Loving it, thank you. Had not run into your repo before. Feels incredibly tight, logical, and easy to learn.

I love it, but what about React Native? How do you match that?

If you want React Native, then you need React :)

From what I gather, React Native is good enough for simple things and is iOS-first, but it quickly fails to live up to the hype for anything moderately complicated, and is actually slower than apps that are native from the get-go. I'm not sure if this is still the case, but aiming for the lowest common denominator seems like a lot to give up.

I would be interested to see a React Native app that truly benefits from being native, aside from the app store and distribution aspect.

Reminds me of Ember FastBoot. Different framework, different use case, but relatively easy to implement.


Not just "relatively easy"... quite literally just two commands! The people behind FastBoot are amazing.

I love how Ember keeps borrowing the best ideas from the other frameworks and evolves in a pretty responsible way.

This is cool! It's worth pointing out that, for SEO reasons, Google now renders JavaScript pages [1] without needing methods like this. The performance benefit is definitely still a plus though.

[1] https://webmasters.googleblog.com/2014/05/understanding-web-...

Google has been saying that for years, but I don’t really believe it…

1) First there were the "hashbang" URLs, which Google said they'd crawl fine, but which did affect Gawker's SEO: http://www.webmonkey.com/2011/02/gawker-learns-the-hard-way-...

2) And Instagram uses server rendering for SEO: https://twitter.com/cpojer/status/711729444323332096 (via http://jamesknelson.com/universal-react-youre-doing-it-wrong...)

They've been working on it for years, and improving it. They've only recently started claiming that their javascript rendering was ready for "prime time".

The "hashbang" URLs (Google calls them Escaped Fragment URLs) actually required you to use server-side rendering; they had nothing to do with the spider's JavaScript rendering engine. I used them on a few sites and it worked pretty well, but I have heard of mixed results from sites that weren't originally on hashbang URLs and transitioned to them. Google doesn't recommend you use these anymore.

> "hashbang" URLs (they call them Escaped Fragment URLs)

Why not use the RFC term? https://tools.ietf.org/html/rfc3986#section-3 "URL fragment"

Also, it won't help you on server side rendering since the server will never even see it on the request.

>>why not use the RFC terms?

Because it's specifically different, on purpose. The RFC doesn't cover the "bang" part: the "!".

>>the server will never even see it on the request

Explained on the (now deprecated) page from Google on this "hashbang" approach:

> "The answer is the URL that is requested by the crawler: the crawler will modify each AJAX URL such as www.example.com/ajax.html#!key=value to temporarily become www.example.com/ajax.html?_escaped_fragment_=key=value"
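The rewrite described in that quote can be sketched in a few lines. `toEscapedFragmentUrl` is a hypothetical name for illustration; the real (now-deprecated) scheme also percent-encoded certain fragment characters.

```javascript
// Sketch of the escaped-fragment rewrite: the crawler turns a hashbang URL
// into a query-string URL that the server can actually see in the request.
function toEscapedFragmentUrl(url) {
  const bang = url.indexOf('#!');
  if (bang === -1) return url; // no hashbang, nothing to rewrite
  const base = url.slice(0, bang);
  const fragment = url.slice(bang + 2);
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + fragment;
}

toEscapedFragmentUrl('http://www.example.com/ajax.html#!key=value');
// → 'http://www.example.com/ajax.html?_escaped_fragment_=key=value'
```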

> Also, it won't help you on server side rendering since the server will never even see it on the request.

It will when Googlebot sees the link and rewrites the URL.

A hashbang is specifically a fragment starting with a ! character, hence "hashbang".

Yes and no. Yes, they can do it; no, they don't do it for all websites, because it's slow and expensive. I'm still waiting for Googlebot to figure out a simple window.location = ...

In conclusion, if you rely on Googlebot reading your JavaScript: 1. for the vast majority of websites it will never happen; 2. for popular websites it will happen, but not as often.

Also, interestingly, this gives Google a new way to push Angular: just make sure Angular-powered websites don't suffer from this...

I was hoping this would be a story about native react-style rendering within the browser engine itself, something I've been waiting for since virtual DOMs first started becoming popular.

I feel like the problem being solved by React and other virtual DOMs is only relevant because user agents never bothered to solve janky page transitions. Couldn't browsers do a minimal DOM update themselves without losing context or introducing weird flashes? Is this what you mean by "react-style rendering within the browser engine itself"?

I still remember your Conj talk in 2012 about virtual DOM, six months before React was announced. Where did you first get the idea?

I was hoping it would be a story about injecting links/buttons for each possible state transition on a page, that would cause a full browser refresh from the server.

Not quite sure what you mean? If you turn off JavaScript in the example app you do get something like that: the links reload the browser page, and the comment form submits the comment, then renders the new state of comments (by processing the comment on form submit and then redirecting you back to the page).

Something more universal (albeit less useful), i.e. something that automatically checks your rendered DOM for every possible Redux state transition, creates a callback URL for each one, and sticks a link for it at the bottom of the page. Essentially, distributed Redux: if Redux is (state, action) -> state, then this is that over HTTP.
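That "(state, action) -> state over HTTP" idea can be sketched in a few lines. Everything here (the reducer, the URL scheme, the names) is hypothetical:

```javascript
// Hypothetical sketch of "distributed Redux": an ordinary reducer, plus a
// helper that encodes each possible transition as a plain link, so clicking
// it triggers a full server round-trip instead of a client-side dispatch.
function counterReducer(state, action) {
  switch (action.type) {
    case 'INCREMENT': return { count: state.count + 1 };
    case 'DECREMENT': return { count: state.count - 1 };
    default: return state;
  }
}

// Encode (state, action) in a callback URL the server can replay on request.
function actionLink(state, action) {
  const params = new URLSearchParams({
    state: JSON.stringify(state),
    action: JSON.stringify(action),
  });
  return '/transition?' + params.toString();
}

counterReducer({ count: 1 }, { type: 'INCREMENT' }); // → { count: 2 }
actionLink({ count: 1 }, { type: 'INCREMENT' });
// → a '/transition?state=...&action=...' URL the server can resolve
```

On request, the server would decode the pair, run the same reducer, and render the resulting state as a fresh page.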

Every time I see someone mention that React and other frameworks are bad for SEO, I can't help but think: shouldn't search engines have to worry about this?

Google and other search engines have brilliant people working for them who can build systems that guess what I want to search before I've entered the entire query, and you're telling me these same people can't do something about pages using the latest technology? Sigh.

Google has been working on this (I don't have deep knowledge of whether their solution is adequate for SEO. I just found this article that suggests it works fine: http://searchengineland.com/tested-googlebot-crawls-javascri...)

But I think, while we're making assumptions about the kind of technology good engineers should create, let's also ask Google-led Angular and Facebook-led React (and similar projects): if your library can manipulate HTML and the browser DOM, even while using async network requests to load data, why can't it take that same data and code and provide an HTML string upfront, as a "first render"? This should be the standard way to do it. If React can re-render an input box every time I type a character, and not just generate HTML but also directly manipulate the DOM to reflect it, surely it can generate the HTML for the input box upfront. (I don't know enough about the internals of these libraries, but maybe the fact that React is declarative makes this easier for it than for other libs.)
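React does in fact expose exactly this as ReactDOMServer.renderToString. A dependency-free sketch of the underlying idea: the same view tree a client-side library diffs against the DOM can be serialized to markup up front. The `h` and `renderToString` functions here are illustrative stand-ins, not React's internals.

```javascript
// Minimal virtual-DOM-to-string sketch. Simplified on purpose: no attribute
// escaping, and void elements like <input> still get a closing tag.
function h(tag, attrs, ...children) {
  return { tag, attrs: attrs || {}, children };
}

function renderToString(node) {
  if (typeof node === 'string') return node; // text node
  const attrs = Object.entries(node.attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join('');
  const inner = node.children.map(renderToString).join('');
  return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}

const tree = h('div', { id: 'app' }, h('input', { type: 'text' }));
renderToString(tree);
// → '<div id="app"><input type="text"></input></div>'
```

The server sends that string as the first render; the client-side library then attaches event handlers to the already-present markup.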

Here is another blog that tested it and concludes it works fine on Google: http://www.centrical.com/test/google-json-ld-and-javascript-...

I don't think Bing and Yandex are supporting this yet though.

There's a followup to that article where they specifically look at Ajax and conclude that there are still issues:


I think what made the web so successful is how easily an HTML document can be statically analyzed. Every action (state transfer) a web page is capable of is statically encoded in the page itself; that is, encoded through <a>, <form>, <input>, etc. tags. This is the essence of the REST architecture. An algorithm analyzing a web page doesn't have to guess at what actions are available, or run arbitrary JavaScript in a sandboxed environment to figure it out. I think it's bad for the web as a whole to abandon this architecture. Web developers need to take responsibility for the web as a whole, and not just violate the web's fundamental assumptions and expect others to create complex, clever solutions to a manufactured problem.

>> I can't help but think, shouldn't search engines have to worry about this

I don't think anyone disagrees with this, but production sites would then have an incentive to wait until search engines have figured it out. Or otherwise suffer the traffic loss.

Google's 2014 summary of how well they do with javascript reads like "we just got this working, but there's lots left to do": https://webmasters.googleblog.com/2014/05/understanding-web-... . The fact that they went as late as 2014 before even trying doesn't inspire confidence.

Great timing. I was just going through the referenced "Tutorial: Handcrafting an Isomorphic Redux Application (With Love)", having gone through the React tutorial the day before. When I first saw your article, I thought it was nothing more than a rehashed version of the "With Love" one. But as I got to the end of "With Love" I wondered:

# "With Love" uses an external API server; how would I handle API requests on the same server?

# There's no real demonstration of how to do routing beyond a single route.

# "With Love" is unfinished, with no live updates etc.

# How to integrate even simplistic file-based persistence?

The Facebook tutorial had all that, but I wasn't sure how to integrate those features into a server-side-rendered framework. So essentially my next step would have been to figure out how to do what you just did here. The extra work of making the application functional (beyond just showing the pre-rendered screen) is icing on the cake.

Repository for modified tutorial: https://github.com/firasd/react-server-tutorial

[ed: I got a bit carried away, I should've prefixed this with a big thank you for a nice and concise intro to setting up a sane reactjs environment! It's refreshing to see live reloading of comments posted from the console by w3m show up in Firefox!]

Ouch, any chance that these examples could be dual-licensed under CC0/public domain (where applicable) and something like MIT/BSD/APL in jurisdictions without public domain?

Because, frankly, examples that read "The examples provided by Facebook are for non-commercial testing and evaluation purposes only. Facebook reserves all rights not expressly granted." are worse than useless. If you end up using anything from the examples in production, you're in breach of copyright!?

How would you render (to a cache) server-side if the rendering depends on the display size in a complicated way (too complicated for CSS to handle)?

Edit: yes, I understand that rendering means generating HTML, but sometimes the HTML depends on display size.

I suppose it depends on how complicated the logic is behind your conditional HTML. If there is an initial JS payload that checks the screen size, I might still try to get an HTML string back from the server after checking the size. But there are certainly diminishing returns to server-side rendering if you're making multiple HTTP calls anyway.
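One hypothetical approach to keeping server-side rendering cacheable despite size-dependent markup: have the client report its viewport width (via a cookie or query parameter) and bucket it into a few layout classes, so there is one cache entry per bucket rather than per pixel width. All names and breakpoints below are made up for illustration:

```javascript
// Bucket the reported viewport width into a small set of layout classes so
// server-rendered HTML can be cached per bucket.
function layoutBucket(width) {
  if (width < 600) return 'narrow';
  if (width < 1024) return 'medium';
  return 'wide';
}

// Render markup that differs structurally by bucket (beyond what CSS alone
// could handle), keyed by the bucket name.
function renderPage(width) {
  const bucket = layoutBucket(width);
  const columns = { narrow: 1, medium: 2, wide: 3 }[bucket];
  return `<div class="layout-${bucket}" data-columns="${columns}"></div>`;
}

renderPage(500);
// → '<div class="layout-narrow" data-columns="1"></div>'
```

The first request (with no width hint) would still need a sensible default bucket, or a client-side correction after load.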

Rendering means generating HTML in this case. The HTML rendering happens in the browser.

full circle?

Was going to say the same thing. Depressing. Most JS devs probably don't realize the irony.

Love how your comment was downvoted to oblivion and yet the thread went on and on.

The irony indeed.

We're back to ASP.NET Web Forms and the UpdatePanel, but with a less terrible API.

How is this full circle? Traditional server-rendered pages don't support the ability to navigate to new pages and fully interact with the application without refreshing.

Traditional server-rendered pages would prioritize serving content first. Then they would sprinkle in some JavaScript to provide dynamic features (like navigating to new pages without refreshing). This is exactly what "isomorphic JavaScript" is providing now, in a very roundabout manner.

> sprinkle in some javascript to provide dynamic features

I've never seen any added JavaScript on a page that went as far as generating and rendering other pages completely; that's always been a refresh. If it's not, then it's literally the same thing as an SPA, and is actually isomorphic JavaScript.

The "sprinkled JavaScript" you're talking about was basically used to do some simple DOM manipulation and a few XHR requests. It is not the same as a full single-page app that is bootstrapped from server-side markup.

> I've never seen any added javascript to a page that went as far as generating and rendering other pages completely, that's always been a refresh.

pjax [1] does what you are describing. There are other similar libraries as well. On user navigation, it makes an XHR request to the server and updates in place any changed DOM elements. I like this because it progressively enhances the user experience. An application should focus on serving content to the user quickly, then improving the experience where browsers leave a gap. Notice that pjax provides the same functionality as isomorphic JavaScript without inflicting server-side JavaScript upon the world ;)

[1] https://github.com/MoOx/pjax
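The pjax-style update can be sketched roughly as below. The extraction step is illustrative only (pjax itself works on parsed DOM nodes, not regexes), and the container id and `X-PJAX` header are hypothetical details:

```javascript
// Pure part: pull a container's inner markup out of a full HTML response.
// Naive string matching, for illustration only.
function extractContainer(html, id) {
  const re = new RegExp(`<div id="${id}">([\\s\\S]*?)</div>`);
  const match = html.match(re);
  return match ? match[1] : null;
}

// Browser-only wiring: fetch the next page, swap in just the changed
// container, and keep the address bar in sync.
async function navigate(url, containerId) {
  const res = await fetch(url, { headers: { 'X-PJAX': 'true' } });
  const html = await res.text();
  const content = extractContainer(html, containerId);
  if (content !== null) {
    document.getElementById(containerId).innerHTML = content;
    history.pushState({}, '', url);
  } else {
    window.location = url; // fall back to a full navigation
  }
}

extractContainer('<body><div id="main"><p>hi</p></div></body>', 'main');
// → '<p>hi</p>'
```

Because the fallback is a normal navigation, the page still works with JavaScript disabled, which is the progressive-enhancement point being made above.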

I just remembered this gem from the React community! react-magic [1] intercepts navigation and form submission, makes an XHR request, parses the response into JSX, and uses React to update the modified DOM elements. I just tested it out on one of my projects, and it had a few issues. I wish it were production-ready, because it seems like a very easy way to enhance a traditional server-side-rendered application.

[1] https://github.com/reactjs/react-magic

Ah, I stand corrected! I've never seen pjax before.

That said, this project was created in February 2015... It's not that the "traditional server-rendered pages" approach had been doing this for decades and isomorphic JavaScript simply reinvented the wheel here; this is a new project.

I shouldn't have claimed that you need full JavaScript to do this, which was clearly wrong, but my point is that isomorphic JavaScript is not the same as the common traditional server-rendered pages used before JavaScript single-page applications came around.

Ideally there should be a way to render JavaScript views/templates on the server side, like React does, without having to write your whole backend in JS. In fact I just googled and found something like this in the reactjs community repo: https://github.com/reactjs/react-php-v8js

One easy solution is to have a server side application that just serves the view layer. That bit can be JavaScript, then your API can be whatever language you want.

Ha, just the other day I set up a WordPress site with React as the (server-side) view layer using v8js. Makes development a lot more fun.

Interesting. I definitely want to experiment with something like that. Most examples I've seen of a WP theme in React (e.g. https://github.com/Automattic/Picard) use the new WP API and a node.js app. So there are (at least) two potential approaches to this.

Yeah, I considered that, and in some cases might take that approach.

But in my particular case the websites need to be server-rendered for SEO, so it's mostly just that I want to use React for reusability, and because I prefer it over WordPress theming with PHP tags.

Furthermore, I can now use React and still deploy the site on typical cheap LAMP-stack hosting!

Isn't that basically what Rails Turbolinks does?

