JDK8 + Facebook React: Rendering single page apps on the server (augustl.com)
118 points by augustl on March 22, 2014 | 29 comments



> I would imagine it's possible to create "progressive enhancement" style React components, so that you'll yield virtual DOM for a plain input box, and somehow enhance it with an auto-complete JS.

You can put initialization code that needs to run on both client and server in componentWillMount. Code that needs to be executed only on the client (e.g. jQuery integration) should go in componentDidMount, which is only executed on the client.

I'll make sure it is documented. Edit: https://github.com/facebook/react/pull/1288/files
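That contract can be sketched in plain JavaScript (the component spec and the renderOnServer helper below are illustrative stand-ins, not React's actual internals): a server-side render runs componentWillMount and render, but never componentDidMount.

```javascript
// Illustrative stand-in for React's lifecycle contract: a "server render"
// invokes componentWillMount + render, but componentDidMount never fires.
var AutoComplete = {
  componentWillMount: function () {
    // Runs on both server and client: environment-neutral setup.
    this.placeholder = 'Search...';
  },
  componentDidMount: function () {
    // Runs only in the browser: DOM-touching enhancement goes here.
    this.jqueryEnhanced = true;
  },
  render: function () {
    return '<input placeholder="' + this.placeholder + '">';
  }
};

// Hypothetical helper mimicking what a server-side renderer would do.
function renderOnServer(component) {
  component.componentWillMount();
  return component.render(); // componentDidMount is intentionally skipped
}

console.log(renderOnServer(AutoComplete));
// The jQuery enhancement never ran on the server:
console.log(AutoComplete.jqueryEnhanced === undefined);
```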


Thank you for that. I was about to ask whether react on the server + react on the client is possible (instead of exclusively server or exclusively client).

Is there a tutorial targeting such a scenario specifically?


Speaking of reactive DOM and JVM languages, this [1] impressed me greatly about a month ago. It's using Scala.js which is coming along fast w/ Scala.Rx and ScalaTags. I found it's a great way to share rendering code between the server side and client side. I was also impressed w/ the readability and small size of the TodoMVC app [2].

1 - https://groups.google.com/d/msg/scala-js/DtRyjfD6qqA/Bfd2pDC... 2 - https://github.com/lihaoyi/workbench-example-app/blob/todomv...


It is not necessary to execute JavaScript on the server side, or to use React.js at all (although it is a pretty neat tool). For instance, in Backbone you can call setElement() on any view to replace its on-screen element with one that you've pre-rendered. If you're clever about templates, this means you can have views that both know how to build themselves up from zero, and can also attach to existing elements on screen, saving yourself that rendering time when you visit the page.

This could result in code duplication, but the end result is the same: a complete state of the application can be rendered server-side and then can evolve independently on the client side.
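The attach-or-build choice above can be sketched with plain objects standing in for Backbone views and DOM elements (this mirrors the shape of Backbone's setElement, not its real implementation):

```javascript
// Plain-object sketch of "attach to pre-rendered HTML or build from
// scratch"; {innerHTML: ...} stands in for a real DOM element.
function ItemView(template, data) {
  this.template = template;
  this.data = data;
  this.el = { innerHTML: '' };
}
ItemView.prototype.render = function () {
  // Build up from zero on the client.
  this.el.innerHTML = this.template.replace('{{name}}', this.data.name);
  return this;
};
ItemView.prototype.setElement = function (el) {
  // Adopt markup the server already rendered; skip the client render.
  this.el = el;
  return this;
};

var view = new ItemView('<li>{{name}}</li>', { name: 'widget' });
var serverRendered = { innerHTML: '<li>widget</li>' };
view.setElement(serverRendered); // attach: no re-render needed
console.log(view.el.innerHTML);  // markup came from the server
```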


How are you going to make Backbone calls without JavaScript? I believe the author's goal was to allow his single-page web app to be properly indexed by search engines (which don't execute JavaScript) without having to change any of his code.


What he's saying is that the views you want to be SEO'd can be generated server side and bootstrapped into Backbone (hook Backbone views into an element that is rendered server side).

I currently do this too - I'll add that it is helpful to use a templating language that will work both server side and client side so you only write your templates once.

What does become a pain is doing routers/controllers both client side and server side: you end up writing some code more than once, and in different languages. That's harder to maintain, especially when changes are made.


Right, and if you want to generate them server side, you either need to have separate templates for server side and client side, or you can execute the JavaScript on the server (which is what the OP decided to do).

No one is saying there aren't other solutions to the problem, but I think the OP's solution met his goals pretty well, and I thought it was a neat concept.


> you either need to have separate templates for server side and client side

There are templating engines that run on both server and client; Google Closure Templates, for example.


You can use cross-language templating options; personally I use Mustache.

No disagreements on multiple solutions - definitely a cool approach.


So, in the past I achieved what the parent was talking about by using a templating engine that could be rendered natively by my server-side language and by JS. In this case it was Mustache. With a little config glue, my templates could be referenced from both the browser JS (where I called them in my Backbone render method) and in Rails (where they were rendered as partial calls in my views).
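To make the point concrete, here is a toy mustache-style substitution function (far simpler than real Mustache, and purely illustrative): because it is a pure function over a template string, the same code can run in the browser and on the server.

```javascript
// Toy mustache-style renderer: a pure function like this can run under
// node/Nashorn on the server and in browser JS alike, so one template
// file serves both sides.
function renderTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (_, key) {
    return data[key] !== undefined ? String(data[key]) : '';
  });
}

var tmpl = '<li class="user">{{name}} ({{email}})</li>';
console.log(renderTemplate(tmpl, { name: 'Ada', email: 'ada@example.com' }));
// -> <li class="user">Ada (ada@example.com)</li>
```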


The nice thing about using React server-side rendering is that you only have to implement your rendering logic once and run it on both client and server. While you need to take a little care (client-only code goes in componentDidMount; only require client-only libs in client-only scope), it's fairly painless. I have my Python app return JSON and pipe it through a node subprocess for the first page load.
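One way such a pipe can be wired up; the line-per-request protocol and the renderToHtml stub below are assumptions for illustration, with the stub standing in for React's server-side string renderer:

```javascript
// Sketch of a line-oriented render protocol for a node child process.
// renderToHtml is a stub standing in for a real server-side renderer
// such as React's string rendering; it is NOT React's API.
function renderToHtml(props) {
  return '<div id="app">Hello, ' + props.user + '</div>';
}

// Parent process writes one JSON document per line; child answers
// with one line of markup.
function handleRenderRequest(jsonLine) {
  var props = JSON.parse(jsonLine);
  return renderToHtml(props);
}

// In the actual child process you would hook this to stdin/stdout, e.g.:
//   process.stdin.on('data', function (line) {
//     process.stdout.write(handleRenderRequest(String(line)) + '\n');
//   });
console.log(handleRenderRequest('{"user": "augustl"}'));
// -> <div id="app">Hello, augustl</div>
```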


    If all your content is generated by a single page web app
    that downloads data and executes JS, your site won't get 
    crawled at all - no popular search engine executes JS.
Googlebot does actually execute at least some JavaScript: https://twitter.com/mattcutts/status/131425949597179904 http://googlewebmastercentral.blogspot.com/2011/11/get-post-... and http://www.jefftk.com/p/googlebot-running-javascript


It's not worth relying on, IMO. We had some pretty straightforward Backbone views populated by AJAX content in production, and they were never crawled. In general it's very unclear what the crawler is capable of.


This is something of a tangential question, but it is relevant to Nashorn and server-side JavaScript execution. Is there a standard practice for sandboxing server-side JavaScript execution?

I'm asking because my product has what I call "Smart Attributes" which are basically programmable context aware metadata. These "Smart Attributes" are designed to be attached to a Git commit, GitHub pull request, diff lines, etc. and can be programmed in JavaScript to react to what it is attached to.

In the beginning I gave the user the ability to execute their Smart Attribute scripts on the server side but I eventually removed that option because I didn't have the time to fully think it through. Basically I was paranoid about missing something that would allow them to do dangerous things on the server. This was a couple of years ago and I was using Rhino from Mozilla.

Now that Nashorn is available, I've been thinking about JavaScript server side execution again, and was wondering if there were any best practice sandboxing methods and/or libraries. If anybody knows of any good documentation or libraries for sandboxing in Nashorn, I would love to hear about it.


> Serving HTML from the server is great for search engines.

Google and the other search engines should move with the times and start to render and index single-page JS applications just like they index HTML pages. Executing JS on the server and getting the resulting output is a solved problem. JS is enabled by default on all browsers that matter (and is very difficult to disable), so I don't understand why we still have to play nice with search engines and render the HTML on the server as well, just for them.


Google is already executing js: https://www.google.com/search?q=google+executes+js


> Google is already executing js

Any idea then why people keep saying¹ that "Serving HTML from the server is great for search engines"?

And why should we use this² when Google is perfectly capable of executing the JS and getting the HTML output all by itself?

¹[citation needed]

²https://developers.google.com/webmasters/ajax-crawling/docs/...


That link describes a method for translating a URL into one where a static version of the dynamic page can be found. So you still have to generate and serve static content yourself; we're not quite there yet. But it seems like at least Google has started executing _some_ JS. There are more search engines than Google, though :)
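Roughly, the mapping that scheme describes rewrites a hash-bang URL into an _escaped_fragment_ query parameter that the crawler fetches instead (a simplified sketch, ignoring some of the spec's encoding edge cases):

```javascript
// Sketch of Google's AJAX-crawling URL mapping: the crawler rewrites
// "#!fragment" into an "_escaped_fragment_" query parameter, fetches
// that URL, and expects the server to answer with a static HTML snapshot.
function toEscapedFragmentUrl(url) {
  var i = url.indexOf('#!');
  if (i === -1) return url; // nothing to translate
  var base = url.slice(0, i);
  var fragment = url.slice(i + 2);
  var sep = base.indexOf('?') === -1 ? '?' : '&';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(toEscapedFragmentUrl('http://example.com/#!/users/42'));
// -> http://example.com/?_escaped_fragment_=%2Fusers%2F42
```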


Time until the page is usable also comes into consideration. You can load the relevant HTML, then begin the bootstrapping of the app.

If I recall correctly, Twitter had this problem. The size of their app + templates bloated to 2MB or so, making that initial page load pretty bad.
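A common shape for that bootstrap step is to embed the first page's data in the server-rendered HTML; the window.__INITIAL_STATE__ name below is just a convention, not anything Twitter-specific:

```javascript
// Server-side sketch: embed the first page's data in the HTML so the
// client app can boot without an extra round trip. Escaping "<" keeps
// a malicious "</script>" inside the JSON from terminating the tag.
function bootstrapScript(state) {
  var json = JSON.stringify(state).replace(/</g, '\\u003c');
  return '<script>window.__INITIAL_STATE__ = ' + json + ';</script>';
}

console.log(bootstrapScript({ tweets: [], user: 'jack' }));
// -> <script>window.__INITIAL_STATE__ = {"tweets":[],"user":"jack"};</script>
```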


Has anyone compared Nashorn vs nodejs (V8) yet?


Nashorn seems to be in SpiderMonkey territory at the moment, http://wnameless.wordpress.com/2013/12/10/javascript-engine-..., which is not too shabby. I know that a lot of performance work is being done so it may approach V8 in time.


It would be great if Nashorn could be used as part of a headless browser, but I think that's still a long way off, if it ever comes to pass.

Browsers have become extremely complex beasts. It is very difficult to simulate one using anything but the original browser code. So my bet is still on phantomjs.

I firmly believe that search engines will have to execute JavaScript as well (if they don't already do it). I think this is just Google's attempt to put off the inevitable for a little while longer: https://developers.google.com/webmasters/ajax-crawling/docs/...


It's just for expanding templates; I'd have thought simple JavaScript processing would do, nothing browser-related.


Nice article! We're currently also coding a single-page web app with React, and we're planning to do the same. I wonder, though: why do the server-side part in the JVM and not Node? My idea was to use the same API-calling code on the client and the server, and just have a Node instance that lives on the same server as the backend make those HTTP requests when serving the first page.

Any reason not to do this? (We're on Mono, not the JVM, but there are plenty of C# JS engines too.)


Calling out to Node is certainly not the end of the world :) I just wanted to show how convenient it is to do when you're on a JVM anyway. The only actual benefit is the API part, i.e. not doing remote calls, just invoking your API in-process.


Nashorn looks like it provides a very nice bridge. It looks like a solid bet if you want to interact with JavaScript without using JavaScript itself.


The ultimate software architecture: http://c2.com/cgi/wiki?AlternateHardAndSoftLayers


Does this mean XSS will run arbitrary code on the server? :)


XSS works because the browser uses "eval" for some attributes, for example <img src="x" onerror="alert(1)">. React never uses eval internally so this class of attacks cannot happen on the server.

The most common technique for exploiting that kind of vulnerability is improper string concatenation (compare SQL injection):

    var untrustedURL = 'x" onerror="alert(1)';
    document.body.innerHTML += '<img src="' + untrustedURL + '">';
In React, you don't use string concatenation to build the virtual DOM. This way you cannot fool React into setting properties that the developer didn't explicitly allow.

    React.renderComponent(<img src={untrustedURL} />, document.body);
    React.renderComponent(React.DOM.img({src: untrustedURL}), document.body);
Another advantage is that every value in the React world is typed. It is either a string, which is always escaped, or a component. You cannot turn a string into a component unless the developer explicitly lets you.
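The "always escaped" guarantee boils down to entity-encoding every string before it reaches markup; a minimal escaper of that kind (illustrative only, not React's actual implementation) is enough to defuse the example above:

```javascript
// Minimal HTML/attribute escaper of the kind a virtual-DOM renderer
// applies to every string value before emitting markup.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

var untrustedURL = 'x" onerror="alert(1)';
console.log('<img src="' + escapeHtml(untrustedURL) + '">');
// -> <img src="x&quot; onerror=&quot;alert(1)">
```

Because the injected quotes come out as `&quot;`, the attacker's string can never break out of the src attribute to add an onerror handler.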

React has been designed with security in mind and prevents by default a large amount of attack vectors that exist in the browser environment.



