
Google updates indexing to execute JavaScript - edsykes
http://googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html?utm_source=javascriptweekly&utm_medium=email
======
gavinpc
This was posted earlier this week:

[https://news.ycombinator.com/item?id=7805144](https://news.ycombinator.com/item?id=7805144)

What I'm wondering is, if you have a server that supports "both" methods
(dynamic page and "fallback"), how do you know which one to serve? And should
they be at different addresses? If they weren't, wouldn't this break caching?
If they were, how can you redirect from a "noscript" tag if you have, um, no
script? Etc.

~~~
tjgq
Google has a system [0] whereby their crawler appends a special parameter to
the query string to signal that you should serve a static, "indexable"
version.

What I get from this announcement is that their crawler is becoming good
enough at executing dynamic pages that having to serve a separate static
version may soon become unnecessary.

[0] [https://developers.google.com/webmasters/ajax-crawling/docs/specification](https://developers.google.com/webmasters/ajax-crawling/docs/specification)
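
For context, the spec in [0] has the crawler rewrite #! URLs into an
_escaped_fragment_ query parameter (an empty one for pages that opt in via a
meta tag), and your server is expected to answer those requests with static
HTML. Here's a minimal sketch of what that could look like in an Express app;
the renderSnapshot helper is hypothetical, standing in for however you
actually produce the static version:

    var express = require('express');
    var app = express();

    // Hypothetical helper: in a real setup this might run the page in a
    // headless browser or render templates to get static HTML for a route.
    function renderSnapshot(route, cb) {
      cb(null, '<html><body><h1>Snapshot of ' + route + '</h1></body></html>');
    }

    app.use(function (req, res, next) {
      var fragment = req.query._escaped_fragment_;
      if (fragment !== undefined) {
        // The crawler asked for the "indexable" version: map the escaped
        // fragment back to the #! route and serve the static snapshot.
        var route = req.path + (fragment ? '#!' + fragment : '');
        renderSnapshot(route, function (err, html) {
          if (err) return next(err);
          res.send(html);
        });
      } else {
        next(); // regular visitors get the normal JS-driven page
      }
    });

    app.listen(3000);

Regular visitors never see the parameter, so the same URL keeps serving the
dynamic page.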

~~~
evv
Personally, I hate and avoid the practice of building twice, once for SEO and
once for usability.

I am always careful to build dynamic apps which render the HTML correctly on
the server. It's handy not just for SEO: it also lets you support legacy
browsers, and it dramatically decreases load times.

But if Google is the only search engine you care about, and load times and
legacy browsers don't matter to you, by all means continue building one-page
JS apps. There are often fewer headaches to be had when you go the simple
route.

~~~
edsykes
Do you mean that you serve the dynamic HTML from the server so that it appears
static to clients, or that you render on the server what would normally happen
on the client when Googlebot is crawling?

~~~
evv
I've been using React on Node.js to pre-render the entire site as it would
appear with the dynamic client-side app.

Each app uses little wrapper libraries so that it behaves the same way on the
client and the server. Both environments have access to routing functions and
cookies, using redirects and headers on the server and pushState on the
client.

These apps are much more cross-platform and quick because the app is visible
as soon as the CSS loads. The app will mostly work before the client JS
launches, because all links are generated by the router and injected into the
anchor hrefs by React.

The idea is to have a genuine, working HTML & CSS site with a dynamic layer
on top when the browser supports it.

Maybe I should start a blog on some of these topics..
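
Here's a rough sketch of the server half of that setup, assuming Express and
React's current server-rendering API (ReactDOMServer.renderToString; the
2014-era function names differed slightly). The Page component and /bundle.js
are placeholders, and the routing/cookie wrappers described above are left
out:

    var express = require('express');
    var React = require('react');
    var ReactDOMServer = require('react-dom/server');

    // A plain component whose links carry real hrefs, so the page works as
    // ordinary HTML before (or without) the client-side JS.
    function Page(props) {
      return React.createElement('div', null,
        React.createElement('h1', null, 'Hello'),
        React.createElement('a', { href: props.nextUrl }, 'Next page')
      );
    }

    var app = express();

    app.get('*', function (req, res) {
      // Render the same component tree the client app would render for this URL.
      var html = ReactDOMServer.renderToString(
        React.createElement(Page, { nextUrl: '/page/2' })
      );
      res.send(
        '<!doctype html><html><body>' +
        '<div id="app">' + html + '</div>' +
        // The client bundle mounts the same component onto #app and takes
        // over navigation with pushState; until it loads, the plain anchors
        // still work.
        '<script src="/bundle.js"></script>' +
        '</body></html>'
      );
    });

    app.listen(3000);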

------
habosa
Previous discussion:
[https://news.ycombinator.com/item?id=7790227](https://news.ycombinator.com/item?id=7790227)

~~~
edsykes
Ah, nice one. No mention of Google in the title, so I missed this.

------
arasmussen
This seems a few years late, given that a ton of content is generated with
JS nowadays and has been for years.

