
JDK8 + Facebook React: Rendering single page apps on the server - augustl
http://augustl.com/blog/2014/jdk8_react_rendering_on_server/
======
vjeux
> I would imagine it's possible to create "progressive enhancement" style
> React components, so that you'll yield virtual DOM for a plain input box,
> and somehow enhance it with an auto-complete JS.

You can put initialization code that needs to run on both the client and the
server in componentWillMount. Code that needs to be executed only on the
client (e.g. jQuery integration) should go in componentDidMount, which is only
executed on the client.

I'll make sure it is documented. Edit:
[https://github.com/facebook/react/pull/1288/files](https://github.com/facebook/react/pull/1288/files)

~~~
polskibus
Thank you for that. I was about to ask whether react on the server + react on
the client is possible (instead of exclusively server or exclusively client).

Is there a tutorial targeting such a scenario specifically?

------
kodablah
Speaking of reactive DOM and JVM languages, this [1] impressed me greatly
about a month ago. It's using Scala.js which is coming along fast w/ Scala.Rx
and ScalaTags. I found it's a great way to share rendering code between the
server side and client side. I was also impressed w/ the readability and small
size of the TodoMVC app [2].

1 - [https://groups.google.com/d/msg/scala-js/DtRyjfD6qqA/Bfd2pDCQIsUJ](https://groups.google.com/d/msg/scala-js/DtRyjfD6qqA/Bfd2pDCQIsUJ)

2 - [https://github.com/lihaoyi/workbench-example-app/blob/todomvc/src/main/scala/example/ScalaJSExample.scala](https://github.com/lihaoyi/workbench-example-app/blob/todomvc/src/main/scala/example/ScalaJSExample.scala)

------
Aqueous
It is not necessary to execute JavaScript on the server-side, or to use
React.js at all (although it is a pretty neat tool). For instance, in Backbone
you can call setElement() on any view to replace its on-screen element with
one that you've pre-rendered. If you're clever about templates, this means
that you can have views that both know how to build themselves up from zero
and can also attach to existing elements on screen, so you can save yourself
that rendering time when you visit the page.
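
The adopt-or-render decision can be sketched without Backbone itself (a minimal stand-in with no real DOM; `View`, `attach`, and the object-literal "elements" are illustrative, not Backbone's API):

```javascript
// A view either builds its element from zero, or adopts a pre-rendered
// one, the way Backbone's setElement() rebinds a view to existing markup.
function View(model) {
  this.model = model;
  this.el = null;
}

View.prototype.render = function () {          // build from scratch
  this.el = { tag: "li", html: this.model.name };
  return this;
};

View.prototype.setElement = function (el) {    // adopt existing element
  this.el = el;
  return this;
};

// Client boot: reuse the server-rendered element when it is on the page.
function attach(view, preRenderedEl) {
  return preRenderedEl ? view.setElement(preRenderedEl) : view.render();
}
```

With real Backbone the check would be something like "does `$('#item-' + id)` find a node", but the shape of the decision is the same.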

This could result in code duplication, but the end result is the same: a
complete state of the application can be rendered server-side and then evolve
independently on the client side.

~~~
thefreeman
How are you going to make Backbone calls without JavaScript? I believe the
author's goal was to allow his single page web app to be properly indexed by
search engines (which don't execute JavaScript) without having to change any
of his code.

~~~
salman89
What he's saying is that the views you want to be SEO'd can be generated
server side and bootstrapped into Backbone (hook backbone views into an
element that is rendered server side).

I currently do this too - I'll add that it is helpful to use a templating
language that will work both server side and client side so you only write
your templates once.
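
One way to get a write-once template is to keep it as plain string-building JS that both sides can load (a deliberately tiny sketch; the function names are made up for illustration):

```javascript
// Escape text for safe interpolation into HTML.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;").replace(/</g, "&lt;")
    .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

// The shared template: require()'d in Node on the server, included with
// a <script> tag in the browser.
function renderItem(item) {
  return '<li data-id="' + escapeHtml(item.id) + '">' +
         escapeHtml(item.name) + '</li>';
}

// Export only when a module system is present (i.e. on the server).
if (typeof module !== "undefined" && module.exports) {
  module.exports = { renderItem: renderItem };
}
```

In practice you would reach for a library that does this for you (Mustache, Handlebars, Closure Templates), but the principle is the same: the template is environment-agnostic JS.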

What does become a pain is doing routers/controllers both client side and
server side - end up writing some code more than once and in different
languages. Harder to maintain, especially when changes are made.

~~~
thefreeman
Right, and if you want to generate them server side, you either need to have
separate templates for server side and client side, or you can execute the
javascript on the server (which is what the OP decided to do).

No one is saying there aren't other solutions to the problem, but I think the
OP's solution met his goals pretty well, and I thought it was a neat concept.

~~~
chii
> you either need to have separate templates for server side and client side

There are templating engines that run on both server and client; for
example, Google Closure Templates.

------
cbr

        If all your content is generated by a single page web app
        that downloads data and executes JS, your site won't get 
        crawled at all - no popular search engines executes JS.
    

GoogleBot does actually execute at least some JavaScript:
[https://twitter.com/mattcutts/status/131425949597179904](https://twitter.com/mattcutts/status/131425949597179904),
[http://googlewebmastercentral.blogspot.com/2011/11/get-post-and-safely-surfacing-more-of.html](http://googlewebmastercentral.blogspot.com/2011/11/get-post-and-safely-surfacing-more-of.html)
and [http://www.jefftk.com/p/googlebot-running-javascript](http://www.jefftk.com/p/googlebot-running-javascript)

~~~
m0th87
It's not worth relying on, IMO. We had some pretty straightforward Backbone
views populated by AJAX content in production, and they were never crawled. In
general it's very unclear what the crawler is capable of.

------
sdesol
This is sort of a tangent question but is relevant to Nashorn and server side
JavaScript execution. Is there a standard practice for sandboxing server side
JavaScript execution?

I'm asking because my product has what I call "Smart Attributes" which are
basically programmable context aware metadata. These "Smart Attributes" are
designed to be attached to a Git commit, GitHub pull request, diff lines, etc.
and can be programmed in JavaScript to react to what it is attached to.

In the beginning I gave the user the ability to execute their Smart Attribute
scripts on the server side but I eventually removed that option because I
didn't have the time to fully think it through. Basically I was paranoid about
missing something that would allow them to do dangerous things on the server.
This was a couple of years ago and I was using Rhino from Mozilla.

Now that Nashorn is available, I've been thinking about JavaScript server side
execution again, and was wondering if there were any best practice sandboxing
methods and/or libraries. If anybody knows of any good documentation or
libraries for sandboxing in Nashorn, I would love to hear about it.

------
y0ghur7_xxx
> _Serving HTML from the server is great for search engines._

Google and the other search engines should move with the times and start to
render and index single page JS applications just like they index HTML pages.
Executing JS on the server and getting the resulting output is a solved
problem. JS is enabled by default on all browsers that matter (and very
difficult to disable), so I don't understand why we still have to play nice
with search engines and render the HTML on the server as well just for them.

~~~
spyder
Google is already executing js:
[https://www.google.com/search?q=google+executes+js](https://www.google.com/search?q=google+executes+js)

~~~
y0ghur7_xxx
> _Google is already executing js_

Any idea then why people keep saying¹ that "Serving HTML from the server is
great for search engines"?

And why should we use this² when Google is perfectly capable of executing the
JS and getting the HTML output all by itself?

¹[citation needed]

²[https://developers.google.com/webmasters/ajax-crawling/docs/specification](https://developers.google.com/webmasters/ajax-crawling/docs/specification)

~~~
augustl
That link describes a method for translating a URL into one where a static
version of the dynamic page can be found. So you still have to generate and
serve static content yourself, which means we're not quite there yet. But it
seems like at least Google has started executing _some_ JS. There are more
search engines than Google, though :)
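
The URL translation that spec describes can be sketched in a few lines (illustrative only, not Google's code):

```javascript
// The AJAX crawling scheme rewrites "#!" URLs into an _escaped_fragment_
// query parameter and expects the server to answer that address with a
// static HTML snapshot of the page.
function toEscapedFragment(url) {
  var i = url.indexOf("#!");
  if (i === -1) return url;                      // nothing to rewrite
  var base = url.slice(0, i);
  var state = encodeURIComponent(url.slice(i + 2));
  var sep = base.indexOf("?") === -1 ? "?" : "&";
  return base + sep + "_escaped_fragment_=" + state;
}
```

So `http://example.com/#!/profile` becomes `http://example.com/?_escaped_fragment_=%2Fprofile`, and it is your job to serve real HTML at that second address.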

------
polskibus
Has anyone compared Nashorn vs nodejs (V8) yet?

~~~
swannodette
Nashorn seems to be in SpiderMonkey territory at the moment,
[http://wnameless.wordpress.com/2013/12/10/javascript-engine-benchmarks-nashorn-vs-v8-vs-spidermonkey/](http://wnameless.wordpress.com/2013/12/10/javascript-engine-benchmarks-nashorn-vs-v8-vs-spidermonkey/),
which is not too shabby. I know that a lot of performance work is being done,
so it may approach V8 in time.

------
fauigerzigerk
It would be great if Nashorn could be used as part of a headless browser, but
I think that's still a long way off, if it ever comes to pass.

Browsers have become extremely complex beasts. It is very difficult to
simulate one using anything but the original browser code. So my bet is still
on phantomjs.

I firmly believe that search engines will have to execute JavaScript as well
(if they don't already do it). I think this is just Google's attempt to put
off the inevitable for a little while longer:
[https://developers.google.com/webmasters/ajax-crawling/docs/specification](https://developers.google.com/webmasters/ajax-crawling/docs/specification)

~~~
swah
It's just for expanding templates; I thought simple JavaScript processing
would do, nothing browser-related.

------
skrebbel
Nice article! We're currently also coding a single page web app with React,
and we're planning to do the same. I wonder, though: why do the server side
part in the JVM and not Node? My idea was to use the same API-calling code on
the client and the server, and just have a Node instance that lives on the
same server as the backend make those HTTP requests when serving the first
page.

Any reason not to do this? (We're on Mono, not the JVM, but there are plenty
of C# JS engines too.)

~~~
augustl
Calling out to Node is certainly not the end of the world :) I just wanted to
show how convenient it is to do when you're on the JVM anyway. The only actual
benefit is the API part, i.e. not doing remote calls, just invoking your API
in-process.

------
jevinskie
Nashorn looks like it provides a very nice bridge. It seems like a solid bet
if you want to interact with JavaScript without using JavaScript itself.

~~~
swah
The ultimate software architecture:
[http://c2.com/cgi/wiki?AlternateHardAndSoftLayers](http://c2.com/cgi/wiki?AlternateHardAndSoftLayers)

------
benmmurphy
does this mean XSS will run arbitrary code on the server :)

~~~
vjeux
XSS works because the browser uses "eval" for some attributes, for example
<img src="x" onerror="alert(1)">. React never uses eval internally, so this
class of attacks cannot happen on the server.

In order to exploit that kind of vulnerability, improper string concatenation
is the most commonly used technique (see SQL injection):

    
    
        var untrustedURL = 'x" onerror="alert(1)';
        document.body.innerHTML += '<img src="' + untrustedURL + '">';
    

In React, you don't use string concatenation to build the virtual DOM. This
way you cannot fool React into setting properties that the developer didn't
explicitly allow.

    
    
        React.renderComponent(<img src={untrustedURL} />, document.body);
        React.renderComponent(React.DOM.img({src: untrustedURL}), document.body);
    

Another advantage is that each value in the React world is typed. It is either
a string, which is always escaped, or a component. You cannot turn a string
into a component unless the developer explicitly lets you.

React has been designed with security in mind and prevents by default a large
amount of attack vectors that exist in the browser environment.
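
The escape-on-serialization idea can be sketched in plain JS (illustrative helper names, not React's actual code):

```javascript
// When attribute values are escaped at serialization time, the onerror
// payload from the concatenation example above stays inert text instead
// of becoming a new attribute.
function escapeAttr(v) {
  return String(v)
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function imgTag(src) {
  return '<img src="' + escapeAttr(src) + '">';
}

var untrustedURL = 'x" onerror="alert(1)';
var html = imgTag(untrustedURL);
// The quotes become &quot; entities, so no onerror attribute is created.
```

This is the difference between concatenating strings and setting typed properties: the escaping happens once, at the boundary, for every value.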

