
Googlebot's recent improvements might revolutionize web development - workhere-io
http://blog.workhere.io/googlebots-recent-improvements-might-revolutionize-web-development/
======
jacquesm
One potential problem here is that google will use this to widen the gap
between it and the 'one page apps' web and other search engines (such as
duckduckgo) that can't match it in resources.

How strong of an advantage that will be in the long run is uncertain, I would
rather see a web that ships pages with actual content in them than empty
containers for a variety of reasons (most of which have to do with
accessibility and the fact that not all clients are browsers or even capable
of running javascript).

This 'new web' is going off in a direction that is harmful, coupled with the
mobile app walled gardens it is turning back the clock in a hurry.

I'm fairly sure this is _not_ the web that Tim Berners-Lee envisioned.

~~~
iLoch
It's not difficult to set up middleware that'll render the page for any
clients that require it. For instance, we can assume any client that
identifies as "bot" and isn't Google probably wants a pre-rendered page,
which we can do quite effortlessly. Here's one implementation for Node.js:
[https://prerender.io](https://prerender.io), or you can always roll your own
with something like PhantomJS.
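
A rough sketch of such middleware, assuming an Express app fronted by a
hypothetical local pre-render service (the bot pattern, service URL and ports
are placeholders, not prerender.io's actual API):

    var express = require('express');
    var http = require('http');

    var BOT_PATTERN = /bot|crawler|spider|facebookexternalhit|twitterbot/i; // placeholder list
    var PRERENDER_URL = 'http://localhost:3001/render?url=';                // hypothetical PhantomJS-backed service

    var app = express();

    app.use(function (req, res, next) {
      var ua = req.headers['user-agent'] || '';
      // Googlebot can now run the JS itself; other bots get pre-rendered HTML.
      if (BOT_PATTERN.test(ua) && !/googlebot/i.test(ua)) {
        var target = PRERENDER_URL +
          encodeURIComponent(req.protocol + '://' + req.get('host') + req.originalUrl);
        return http.get(target, function (rendered) { rendered.pipe(res); });
      }
      next(); // normal clients get the regular single-page app shell
    });

    app.use(express.static('public'));
    app.listen(3000);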

~~~
BorisMelnik
wow this is amazing. would love to see an offshoot of this where it could
render a sitemap, or even keep a live sitemap up to date via cron.d or
something (just hoping out loud)

~~~
gildas
You can use SEO4Ajax [1] to crawl your SPA and generate an up to date sitemap
dynamically.

[1] [http://www.seo4ajax.com/](http://www.seo4ajax.com/)

------
andrenotgiant
Taking a step back: The "Page" paradigm is still very much alive, despite
these recent javascript parsing advances.

1. Google still needs a URL-addressable "PAGE" to which it can send Users.

2. This "PAGE" needs to be findable via LINKS (JavaScript or HTML), and it
needs to exist within a sensible hierarchy of a SITE.

3\. This "PAGE" needs to have unique and significant content visible
immediately to the user, and on a single topic, and it needs to be
sufficiently different from other pages on the site so as not to be discarded
as duplicate content.

~~~
mixonic
I'd debate the phrase "step back". If you replace all your references to PAGE
with URL, you get closer to a real meaning.

URLs for single-page applications are a serialization of application state.
The fact that we now have an application platform (JavaScript/HTTP) that
provides shareable, mostly human-readable state (URLs) and is also _indexed
and searchable_ is nothing short of incredible.

Yes, the basic abstractions we use are the same. We will have URLs that
address content in our applications. But now these are applications running on
Google's own servers. Google is running my application (and hundreds of
thousands more), and trying to understand what they mean to humans. This is a
pretty amazing step forward.

Imagine Apple announcing it would run all iOS applications, interacting like a
user to build a search index. IMO, this parallel shows what makes Google's
commitment to running JavaScript apps exciting.

~~~
andrenotgiant
The point I was trying to make is this:

With every new capability from Googlebot comes new opportunities for us to
screw it up as developers.

If we were to replace PAGE with URL, and URL is simply a serialization of
application STATE, we could easily end up with infinite URLs that lead to
STATES that are not really that different, unique or appealing as answers to
queries users type into Google.

When deciding how to build Search-accessible Web Apps, and specifically what
to expose to Google, we need to keep in mind that Google likes PAGES that
follow the requirements I detailed above.

------
blauwbilgorgel
Create web applications, rank as a web application.

Create web pages, rank as a web page.

This is a band-aid by Google. Developers created inaccessible websites (JS-
only, no HTML fallback) and Google still wanted to give those sites a chance
to be in the index. Like when Google made it possible to index text inside
.swf movies. This did not mean that flash sites suddenly ranked alongside
accessible websites. No, it only meant that you could now find content with a
very targeted search query.

Don't think you are gaining any SEO-benefit from one-page JS-only
applications, just because Google made it possible for you to start ranking.

And don't forget your responsibility as a web developer to create accessible
content. Forgetting progressive enhancement, fallbacks, a noscript explanation
of why you need JS, and ARIA is devolution. If Google can index your site, but
a blind user has a problem with your bouncy Ajax widget, then you have failed
to cater to all your users. If you lazily let Google repair your mistakes,
then soon you will be a Google-only website.

~~~
mixonic
100% FUD.

There is no evidence that Google is going to punish my website for being
rendered with JavaScript, as you imply with your first two comments.

Google is indexing the HTML generated by JavaScript, and the links in that
HTML. Not some non-web custom format like SWF.

JavaScript driven sites work just fine with modern screen-readers.
[https://developer.mozilla.org/en-
US/docs/Web/Accessibility/A...](https://developer.mozilla.org/en-
US/docs/Web/Accessibility/An_overview_of_accessible_web_applications_and_widgets)
and in 2014 97.6% of screen-readers ran JavaScript
[http://webaim.org/projects/screenreadersurvey5/#javascript](http://webaim.org/projects/screenreadersurvey5/#javascript)

In 2013, 92 out of 93 visitors to a UK government webpage supported
JavaScript: [https://gds.blog.gov.uk/2013/10/21/how-many-people-are-
missi...](https://gds.blog.gov.uk/2013/10/21/how-many-people-are-missing-out-
on-javascript-enhancement/) And mixed into that 1.1% were users getting broken
JS, behind firewalls, disabling JS, etc.

Google making this change does not force you to build a JavaScript-driven
website, but it does make it more attractive.

~~~
blauwbilgorgel
If I wanted to imply that Google will punish your website for being rendered
with JavaScript, I probably would have said so. It would likely be false too,
as it is less a punishment than a failure to maximize your chance to rank (to
put your best foot forward as a website).

Accessibility is not a numbers game. In many countries it is a legal
requirement. And adhering to the WCAG means providing non-JS fallbacks or
progressive enhancement. RMS not being able to access your content is an
accessibility issue too; it does not have to involve a disability. It can be
technical in nature, like disabling JS, being behind a corporate firewall, or
your browser not supporting pushState.

If you want to look at stats, take a look at the stats and surveys on
accessibility of dynamic web applications. Just because your screenreader
supports JavaScript does not mean you have no accessibility issues due to
JavaScript. Rich internet applications should use WAI-ARIA. I don't think
people who create websites without a fallback (avoiding this issue entirely)
will worry about creating websites with ARIA support. And if they do care
about such accessibility, they should also provide a non-ARIA, non-JS fallback.

Google making this change makes it possible to have your non-fallback JS-only
application be indexed. It does not make it more attractive from an SEO or
accessibility viewpoint.

~~~
mixonic
Web accessibility, as we commonly use the term, pertains to creating a website
that disabled users can interact with and navigate. It does not pertain to
those who choose to or are forced to disable JavaScript (the RMS example).
Creating an accessible site is a challenge regardless of what technologies you
pick, for sure. To turn your own phrasing around: just because your
screenreader supports JavaScript does not mean you have no accessibility
issues, and just because your website uses JavaScript doesn't mean you have
accessibility issues. A plain HTML website can have accessibility issues. So
can a JavaScript one.

AFAIK, nothing in WCAG says you must have a non-JavaScript fallback to adhere
to their standard. If you can back that up I am all ears, I would be
interested to read it.

> Google making this change makes it possible to have your non-fallback JS-
> only application be indexed. It does not make it more attractive from an SEO
> or accessibility viewpoint.

The attractiveness of JS heavy development is not in an inherent SEO or
accessibility benefit. Absolutely true.

The benefit is a development style that is more productive, giving me more
time as a developer to focus on solving the problem at hand, be it business
logic, SEO, or accessibility. You can debate this benefit, but don't imply
that single-page apps cannot have SEO on par with HTML sites and good
accessibility.

~~~
blauwbilgorgel
> Web accessibility, as we commonly use the term, pertains to creating a
> website that disabled users can interact with and navigate. It does not
> pertain to those who choose to or are forced to disable JavaScript...

Often web accessibility focuses on people with a disability, correct.
Accessibility, like I said, really is more than that, though. From the Wiki:
_Accessibility is the degree to which a product, device, service, or
environment is available to as many people as possible._ Hence it does pertain
to those who choose to or are forced to disable JavaScript. It literally means
_as many people as possible_, RMS included. Even the WCAG do not solely focus
on assistive technologies, but include "a wide variety of user agents".

The comment "just because your screenreader supports JavaScript does not mean
you have no accessibility issues" was in reply to your statistics on JS-
support for screenreaders. 98% of screenreaders supporting JavaScript is moot
when less than 75% of browsers support pushState. In other words: You leave
much more than 2% of users incapable of accessing your content. WebAim Surveys
show that people have increasingly more trouble accessing content on JS-heavy
social sites and dynamic web applications.

> A plain HTML website can have accessibility issues. So can a JavaScript one.

A JavaScript site can have a problem. If you serve it without a fallback
(under the assumption that 98% of your users can access it that way) then it
has a problem for sure. I have nothing against JavaScript. I have a problem
with JavaScript sites that don't provide a fallback or weren't built according
to progressive enhancement principles.

> AFAIK, nothing in WCAG says you must have a non-JavaScript fallback to
> adhere to their standard.

It said so specifically in WCAG 1. WCAG 2 is more ambiguous. You can have a
no-fallback application that requires JavaScript provided that the content
cannot be shown in any other way (a fallback is impossible) and you clearly
explain in <noscript> why JavaScript is required.

Where a fallback IS possible, not providing one lowers accessibility. This is
the relevant principle:

 _Principle 4: Robust - Content must be robust enough that it can be
interpreted reliably by a wide variety of user agents, including assistive
technologies._

If you do not provide a fallback and require JavaScript, then your content
cannot be interpreted reliably by a wide variety of user agents. Not providing
a fallback goes against this principle.

Relevant guideline:

 _Guideline 4.1 Compatible: Maximize compatibility with current and future
user agents, including assistive technologies._

JS-only non-fallback sites do not maximize compatibility, they minimize it,
breaking this guideline.

Government 508 guidelines for accessibility:

 _When possible, the functionality should be available without requiring
JavaScript. When this is not possible, the functionality should fail
gracefully (i.e., inform the user that JavaScript is required)._

Webaccessibility.com best practices:

 _Ensure elements that use ARIA provide non-ARIA fallback accessible content._

Since you should markup your rich web apps with ARIA, and you should provide a
non-ARIA accessible fallback, you should provide an accessible fallback for
your rich web app.

I do know that this can be a point of debate, and that is fine. It is up for
interpretation what "maximize compatibility" means to you. If you have legal
obligations to maximize compatibility (like government organizations in The
Netherlands) then this becomes a harder rule.

> The benefit is a development style that is more productive, giving me more
> time as a developer to focus on solving the problem at hand, be it business
> logic, SEO, or accessibility.

I really don't understand this way of thinking. If you want to spend time on
accessibility, start with a fallback; don't create a website without one and
then cheer that you now have time left to fix the problem you created a few
minutes earlier...

If you want to solve problems with SEO, don't start out by creating one :D

> but don't imply that single-page apps cannot have SEO on par with HTML sites
> and good accessibility.

Good accessibility means good SEO. No fallback means poor accessibility. Draw
your own conclusions (Socrates is mortal?).

------
rcsorensen
Sending the framework of a page to your users and expecting them to do all the
heavy lifting and slow loading of constructing the page and fetching the data
is still rather unfriendly if you can afford a server to construct it.

If you love your users, give them HTML and let the Javascript enhance it.

Projects like Facebook's React ( [http://facebook.github.io/react/docs/top-
level-api.html#reac...](http://facebook.github.io/react/docs/top-level-
api.html#react.rendercomponenttostring) ) and Rendr
([https://github.com/rendrjs/rendr/](https://github.com/rendrjs/rendr/)) let
you use server rendering as well as the single page technologies on the client
side.
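
For example, a rough sketch of that server-rendering idea with 2014-era React
(React.createClass / renderComponentToString, no JSX); the Article component,
loadArticle helper and route are illustrative, not taken from React's docs:

    var express = require('express');
    var React = require('react');

    // Illustrative component.
    var Article = React.createClass({
      render: function () {
        return React.DOM.article(null,
          React.DOM.h1(null, this.props.title),
          React.DOM.p(null, this.props.body));
      }
    });

    // Hypothetical data lookup; a real app would hit a database here.
    function loadArticle(id, cb) {
      cb(null, { title: 'Article ' + id, body: 'Lorem ipsum...' });
    }

    var app = express();

    app.get('/articles/:id', function (req, res) {
      loadArticle(req.params.id, function (err, data) {
        if (err) return res.status(500).send('error');
        var html = React.renderComponentToString(Article(data));
        // Ship real HTML now; the same component can re-attach on the client
        // with React.renderComponent for the single-page behaviour.
        res.send('<!doctype html><div id="app">' + html + '</div>' +
                 '<script src="/bundle.js"></script>');
      });
    });

    app.listen(3000);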

~~~
marknutter
The lifting ain't that heavy. And besides, wouldn't you want your server
spending its precious cycles on things the clients absolutely cannot handle?

~~~
ssorallen
The lifting can be heavy for mobile devices with slow CPUs and limited battery
life, unlike the servers running your site. Also, if your server renders the
site and some of the pages are public, it can cache the result and let the web
server serve cached HTML rather than render the page in the web framework for
each request.

~~~
acdha
> The lifting can be heavy for mobile devices with slow CPUs and limited
> battery life unlike the servers running your site.

The reverse is also frequently true: if client rendering can improve
cacheability or reduce the data going over the wire the radio savings will pay
for a LOT of text updates. Similarly, if you can transfer a large listing
without creating thousands of DOM nodes, the results can be a wash, depending
on exactly how much data, the browser, etc.

There isn't a single right answer here - it really depends on the application
and good analysis.

------
tragic
Like many, I am suspicious of the rather overbearing claims made on behalf of
the SPA architecture.

I just launched a website. It's a weekly periodical with political analysis,
word-count on articles 1500-6000. It needs to carve up the content in a few
different ways (categories, issue numbers etc), decorate an article with links
to other relevant content, and provide a nice CMS for non-tech people to use.
So it's on Django, with the regulation sprinkling of jQuery. (If it were only
techies updating it, you could probably do it with a static site generator...)

To me, the idea that you'd try and force something that is plainly a big
collection of pages into a 'single page' is just philosophically bizarre, like
printing Moby Dick on a square mile of paper, using some amazing origami
skills to present it to the reader, all in order to save a bit of effort at
the paper mill.

The googlebot business is one aspect of a bigger issue, which is that a
website needs to be consumable by a host of different clients. I don't see how
you can do the SPA thing without making major assumptions about those clients.

Sometimes, of course, those assumptions can be justified - it depends on the
job. And Angular etc are enormously fun to play with, and handled well can
enable a great UX for certain jobs. But I don't think it's 'the future'. It's
another tool in the box.

Relevant here, a nice talk by John Allsopp from Full Frontal 2012:

[https://www.youtube.com/watch?v=KTqIGKmCqd0](https://www.youtube.com/watch?v=KTqIGKmCqd0)

EDIT: clarification

------
xpose2000
I'm not sure if this announcement changes anything. The bottom line is to make
apps for the end user. Google is simply saying that those best practices are
now crawlable in a way that is very mature. The same rules still apply.

A simple guide can be found here:
[https://developers.google.com/webmasters/ajax-
crawling/](https://developers.google.com/webmasters/ajax-crawling/). Although
I suspect it needs to be updated since it's from 2012.

If you create an application, make sure it alters the URL when applicable. For
simple apps, the following repos will be useful:

The old way, that still works: [https://github.com/asual/jquery-
address](https://github.com/asual/jquery-address)

The better way, preferred:
[https://github.com/browserstate/history.js](https://github.com/browserstate/history.js)
or [https://github.com/defunkt/jquery-
pjax...](https://github.com/defunkt/jquery-pjax...). not sure which is better
to be honest. Feel free to chime in.
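
Underneath, all of these wrap the History API; a bare-bones sketch in plain
browser JS (renderRoute and the #content element are app-specific
placeholders):

    function renderRoute(path) {
      // App-specific placeholder: swap in whatever content belongs at this path.
      document.getElementById('content').textContent = 'Now showing ' + path;
    }

    function goTo(path) {
      history.pushState({ path: path }, '', path); // change the URL without a reload
      renderRoute(path);
    }

    // Back/forward buttons fire popstate instead of reloading the page.
    window.addEventListener('popstate', function (e) {
      renderRoute((e.state && e.state.path) || location.pathname);
    });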

~~~
rpedela
fixed link: [https://github.com/defunkt/jquery-
pjax](https://github.com/defunkt/jquery-pjax)

------
ssorallen
> Single page apps are not a new concept, but up until now they were typically
> a bad solution for public websites that depend on hits from search engines

If your users (I'm talking humans, not bots) have to download a mountain of
JavaScript and execute it before seeing any content, your site is slower than
it could be for everyone. We should stop saying that "single page apps", i.e.
sites rendered in JavaScript in a browser, are bad because they can't be
scraped by a bot. They are bad for EVERYONE who wants to view the site because
of the network and CPU time it takes to download the assets and render the
site in the browser.

~~~
scotth
Doesn't it all depend on how long they spend on the site? If it's one page and
bounce, sure, that's a terrible experience. More pages than that? Now we're
starting to see savings.

~~~
ssorallen
If you serve a rendered page of HTML with CSS links, browsers can
progressively render the page as it is downloaded. Users will notice that on
the first page load, particularly on higher latency connections where round
trips for resources like JavaScript files are expensive.

~~~
scotth
My point still stands. Time to load is only part of the equation.

------
acqq
So you want to have a URL for every piece of content, but you don't want to
provide the content as HTML, instead expecting that once the page is received
by the client, the client only then separately loads the content? Just because
it's easier for you to program, you want to deliver the content to me much more
slowly than you could?

There is some strange logic there.

~~~
marknutter
Slower in some respects and faster in others. Suppose I cache the data I get
back from the server for page 1 and page 2 of content. Now, if the user
switches between page 1 and page 2, they don't needlessly ask the server for
the HTML every time, like they do when relying on the server to render
templates.
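
In plain browser JS the idea is roughly this (a sketch; the /pages/<n>.json
endpoint and the #content element are assumptions):

    var pageCache = {};

    function render(data) {
      document.getElementById('content').innerHTML = data.html;
    }

    function showPage(n) {
      if (pageCache[n]) return render(pageCache[n]);   // cached: no server round trip
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/pages/' + n + '.json');
      xhr.onload = function () {
        pageCache[n] = JSON.parse(xhr.responseText);
        render(pageCache[n]);
      };
      xhr.send();
    }

    // Switching between page 1 and page 2 asks the server at most once per page.
    showPage(1); showPage(2); showPage(1);   // the third call is served from pageCache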

And I'm not sure where you're getting that it's "easier to program" single
page apps than it is to simply rely on the server to render HTML. The fact
that it's _not_ easy is the very reason we have so many competing front-end
frameworks trying to solve the problem elegantly.

------
grey-area
_With the new improvements to Googlebot, single page apps will likely advance
from being niche solutions for non-public websites to being the default way to
build websites. A website will contain a single HTML page (typically heavily
cached and served via a CDN). The JS on that page will then fetch content (as
JSON) from the server and change the path as necessary using pushState._

I find the cheerleading for single page websites disconcerting and the
proposed benefits unconvincing. Why should this be the default way to build
websites? A few desultory upsides are presented without a full consideration
of the multiple downsides to client-side development.

The biggest advantage of thick-client architecture is sending less data to the
client and, if you like using JavaScript, writing everything in JS. But there
are multiple downsides compared to more traditional thin-client websites: load
times that depend more on client capabilities (hugely variable and out of your
control) than on servers; dependence on JS on the client; loading pages while
your content is placed in the DOM by JS; forcing everyone to write in JS
instead of switching language on the server whenever they like; and ignoring
the simple document model of HTML served at predictable URIs, which has served
the web so well, means you can use dynamic or static documents, and lets full
documents be cached for very quick serving and by intermediaries, etc. Of
course some of these can be overcome, but there are serious obstacles, and the
advantages are meagre to non-existent unless you enjoy JavaScript and feel it's
the only language you'll ever need.

For someone who doesn't like working in JS, and/or doesn't have a huge amount
of logic already in JS (many websites work just fine with some limited Ajax),
trying to force every website into the Procrustean bed of client-side
development is not an appealing prospect. I can see why it appeals to those
who have already invested in JS frameworks, but predictions of its future
dominance on the web, like predictions that eWorld, ActiveX or mobile would
replace the web, are overblown.

I suspect the birth and death of JavaScript will be a footnote in the history
of the web, rather than taking it over as this article suggests. If anything,
we should be looking to replace our dependence on JS, not making it mandatory.

------
needs
I can't believe that single-page web apps are easier to write than good old
websites. Maybe you can gain some performance improvement, but if you use a
framework you will lose that very small gain. Those who claim that development
is easier with a framework on a single page need to learn programming, because
in most cases the "old" way works very well and is incredibly faster than a
bloated JavaScript page.

I really, really dislike this approach to building websites; it makes the code
and the design hard to understand and causes a lot of problems when the time
comes to debug or make major changes.

There is no magic when using JavaScript: it will slow down the client, and
manipulating the DOM is very slow. Doing things server-side costs nothing
compared to JavaScript. Remember that loading a web page is very fast when you
have only CSS and HTML, because it is very easy to cache it and do some pretty
nice optimizations on it.

With frameworks, making a "web app" becomes a huge nightmare: things become
overly complex and bloated, the request has to pass through a lot of layers
before ending up somewhere, and development is not faster than writing custom
code. Good frameworks don't make good programmers.

When the article says "put the CSS inside a <style> tag on the page - and the
JS inside a <script> tag", it's just horrible, fuck it.

~~~
workhere-io
_Those who claim that development is easier with a framework on a single page
need to learn programming, because in most cases the "old" way works very well
and is incredibly faster than a bloated JavaScript page._

Who says the page will become bloated with JS just because you use clientside
loading? The mechanism I'm talking about can be done with something like 10
lines of JS or less. No one's saying you have to use AngularJS with every web
page you make.

 _When the article says "put the CSS inside a <style> tag on the page - and
the JS inside a <script> tag", it's just horrible, fuck it._

First of all, this is not a requirement of single page apps, just an option.
Secondly, when you're developing, you would still have your JS and CSS in
separate files. Your compiler would then minify the whole thing and put it
inside your minified HTML file.
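
As an illustration of that build step, a minimal sketch assuming a Node script
and already-minified assets (the file names and placeholder comments are made
up):

    var fs = require('fs');

    var html = fs.readFileSync('src/index.html', 'utf8');
    var css  = fs.readFileSync('build/app.min.css', 'utf8');
    var js   = fs.readFileSync('build/app.min.js', 'utf8');

    // The template carries placeholder comments marking where to inline the assets.
    html = html
      .replace('<!-- inline:css -->', '<style>' + css + '</style>')
      .replace('<!-- inline:js -->', '<script>' + js + '</script>');

    fs.writeFileSync('dist/index.html', html);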

------
CMCDragonkai
Not every search engine will be able to scale a JS virtual machine for all
the pages it needs to index. There are also social network bots that you might
want to support, such as Facebook's and Twitter's, which will not be able to
crawl JavaScript either.

In any case, if you want a solution to this SEO problem now, I created
SnapSearch ([https://snapsearch.io](https://snapsearch.io))

------
bhartzer
I wouldn't actually call these "recent" improvements. I mean, Google has been
handling JavaScript for years now, and they're just now coming out and
publicly saying it. Which is typical Google.

~~~
workhere-io
What they were saying before was that you always need an HTML fallback for
JS-generated content. Now it seems they're saying you don't necessarily need
one.

------
drakaal
Optimism rather than fact.

There is a lot more to it. I am pretty well known as an SEO, and while I would
love this to be true it isn't.

Google's improvements to Googlebot are mostly targeting spam and obfuscation
of content. The idea is not to discover content so much as to avoid having
content hidden.

Previously you could have a webpage that appeared to be about puppies, but
then used any number of dynamic methods to instead show naked cam girls.
Google worked to fix this.

Google is now doing some indexing of named anchors, and this allows for
linking to a point within a page, as it were. But that is a long way from
building indexable single page applications.

-Brandon Wirtz SEO (formerly Greatest Living American) [http://www.blackwaterops.com](http://www.blackwaterops.com)

------
basseq
This is a great improvement, but I'm struck by two things:

1. The state of JavaScript-only application development is still nascent. The
number of JS-only sites I see that are buggy, don't use pushState correctly,
or have other shortcomings is growing faster than the overall trend. Not that
it can't be done well, but if your JavaScript-only "app" is really just a
standard website, you might want to re-think your approach.

2. There has to be a better way. There are distinct benefits to approaches in
caching content and providing feedback, but JavaScript seems to be a kluge-y
approach. It reminds me of frames back in the day. Some of this is browser
support; some of this is lack of standardization; some is perhaps a missing
piece of the HTML spec; etc.

------
workhere-io
Author here. Some of you are saying that this will lead to bloated, JS-heavy
websites. I disagree. The JS necessary for making a single page app amounts
to something like 10 lines (plus jQuery or something similar, but that is
already included in most normal pages anyway).

A single page app isn't JS-heavy by definition, and a "normal" page (with HTML
generated on the server) can easily be JS-heavy. It all depends on how you
program it. Just keep in mind that single page apps don't necessarily need to
use heavy frontend frameworks such as Knockout, Ember or AngularJS.
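
For illustration, the mechanism is roughly this (a sketch assuming jQuery is
on the page and that each path has a matching ".json" endpoint returning html
and title fields; the data-spa selector and URL convention are placeholders):

    // Intercept internal links, load JSON, swap content, update the URL.
    $(document).on('click', 'a[data-spa]', function (e) {
      e.preventDefault();
      load(this.getAttribute('href'), true);
    });

    window.onpopstate = function () { load(location.pathname, false); };

    function load(path, push) {
      $.getJSON(path + '.json', function (data) {
        $('#content').html(data.html);        // swap in the new content
        document.title = data.title;
        if (push) history.pushState({}, '', path);
      });
    }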

------
CHY872
This seems like an odd thing to be shouting about.

As far as I can see, throughout this thread the performance benefits of single
page apps are touted as being fantastic, making it worthwhile to use the new
technology etc.

When has performing operations efficiently ever been the domain of the web?
Websites in my experience have the worst performance of almost any software I
use! I've seen developers cite 200ms or longer to load a page as being a good
benchmark - that seems pretty awful to me.

If getting this tiny performance improvement (which often results in poorer
performance on the first load - not ideal for many) is so critical, why do the
same developers not invest in writing more performant server apps? Yes, often
the database is a bottleneck, but these problems can in general be worked
around (either by use of faster queries or caching etc).

Why attempt to get a small performance benefit by saving 30-odd kB of HTML on
each page load (static and so essentially free for the server), when one could
get a much larger performance benefit by optimising the backend?

Almost all serious sites will still see their page load being limited by the
time it takes to produce the page. It's possible to write really fast websites
(try [http://forum.dlang.org/](http://forum.dlang.org/)) but no one seems to
do it :(

Unless almost all of your website is static, you won't be saving all that
much time.

~~~
workhere-io
 _Unless almost all of your website is static, you won't be saving all that
much time._

Single page apps can easily be static (static HTML page + static JSON). The
point of this would be to decrease the download size for each new page visited
by the user.

~~~
CHY872
I think you missed my point. In each web page downloaded there's a bunch of
(basically constant) static data - to download your javascript files, and set
up your document - your template (or similar). This is the only data that
single page apps can eliminate - everything else must either be queried from
the server or can already be cached.

Some sites obviously inline CSS or JavaScript, but that can be eliminated if
necessary (and only affects the first page load anyway).

This information is free to generate on the server side, so it's not slowing
down that computation at all (it's just a stringbuilder function,
essentially). Furthermore, the transfer time is generally not the deciding
factor - it's the server side time to put the rest of the information
together.

To give one example, I went to a typical website - the Guardian (it's a fairly
standard high-traffic news website). Chrome informs me that in order to
request one article, it took 160ms to load the html - 140ms of waiting and
20ms of downloading. Now, the RTT is about 14ms, so that's about 110ms of
generating the web page and 20ms of actually downloading it. It's about 30kB
of compressed HTML (150kB uncompressed), most of it 'static content' -
inlined CSS and JS.

Using the single page model would reduce their page download time (apart
from the first page) by an absolute maximum of 20ms - which means that the
time to load each page would be reduced by about 12%.

This is fine, but almost all of the data is just the result of string
concatenations and formatting - i.e. free processing (or at least almost-free
processing). It's getting the rest of the data together that's somehow taking
the 100ms (or crap implementations).

The cost of moving data around on websites is typically small compared to the
actual production time of the content. That's why we see people preferring to
inline huge amounts of CSS etc on each web page and having people download it
time after time - because it's only about 10kB compressed the data transfer is
inconsequential, and normally is dominated by the RTT.

Spending all the time writing these frameworks because of performance benefits
is a fallacy - the data still has to be generated somewhere, and if it happens
dynamically it's slow as hell. The savings can never become that great - at
most they lead to 20-30ms of improvements if bandwidth is acceptable.

Writing the frameworks because they make development easier is a much more
reasonable argument.

This still all detracts away from the fact that non-static websites are
typically dog slow and they shouldn't be.

------
PinguTS
The single page app has another _huge_ drawback.

The reload via JS fails silently when you are on a bad Internet connection,
like the one I am on here in Nepal. You cannot simply do a reload, like you
can with a simple HTML page.

For example, Facebook is unusable here because of this issue. It works only
when you request the mobile site of Facebook in the browser.

So please, get rid of that damn JS, if you care about your user base and
usability.

BTW: that problem also happens on bad hotel Wifi in the US.

------
adamconroy
Somewhat off topic, but how do single page apps deal with people hacking the
JS? For example, if as a particular user I am only allowed to perform certain
functions within the app, and that functionality is contained in the JS, then
it doesn't seem like it would be very hard to modify the JS to enable the
functionality I shouldn't be allowed to use.

~~~
ailox
Usually the functionality of the app exists in the backend, which would be
server side. No matter what you do on the frontend, there should be no way for
you to trigger actions in the backend you were not authorized to perform.
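
For example, a sketch of such a server-side check in Express (the route,
permission name and the shape of req.user are illustrative; a real app would
populate req.user with its session/auth middleware of choice):

    var express = require('express');
    var app = express();

    function requirePermission(perm) {
      return function (req, res, next) {
        // req.user would be populated by whatever auth middleware is in use.
        if (req.user && req.user.permissions.indexOf(perm) !== -1) return next();
        res.status(403).send('Forbidden');
      };
    }

    // Even if the frontend is hacked to show a "delete" button, this check still runs.
    app.delete('/api/articles/:id', requirePermission('articles:delete'), function (req, res) {
      // ... perform the delete ...
      res.status(204).end();
    });

    app.listen(3000);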

------
allendoerfer
This news article seems to come up every few years. Nevertheless, the niche of
apps that can profit from this is quite small.

Either you have a highly interaction-heavy web app, where it makes sense to
execute most of the code on the client and deliver the content as JSON, or you
have a content-heavy website, where it makes sense to deliver cached content
to the client.

There are some apps in between that are highly interactive and content-heavy,
like web versions of social apps. For them, the additional question arises of
whether they _want_ to be crawled by Google, or whether Google wants to index
their content.

To profit from this, you need an app whose content users search for and
interact with several times after they have found it. So I guess
"revolutionize" seems a bit much to me.

------
slashdotaccount
Please use this instead:
[https://en.wikipedia.org/wiki/Progressive_enhancement](https://en.wikipedia.org/wiki/Progressive_enhancement)

~~~
SquareWheel
But we're talking about making AJAX pages robot-friendly, not making regular
pages mobile-friendly.

------
kayoone
Offloading most of the work to the client has its downsides too. If that were
the reality, I would assume that mobile devices would require quite a bit more
battery power to render basic web pages.

------
h1karu
Google is not the only search engine.

~~~
workhere-io
Which I emphasized in the post :)

------
justinph
We're still using hypertext transfer protocol, right? You need to send
hypertext down the wire.

We shouldn't let one company, Google, dictate how the web works, simply
because of their proprietary technological innovation.

~~~
o_____________o
Roads are meant for horses, right? We shouldn't let one company, Ford, dictate
how the roads work.

My man, if we only used infrastructure and technology in the way it was
originally intended and narrowly imagined, the world would be a dim place.

~~~
justinph
Ok, I see your point.

But, have Yahoo or Bing or DuckDuckGo made the transition to be able to crawl
the web with a full JS & DOM rendering engine? I doubt it. By eschewing that
compatibility we're setting a very high bar for what any competitor to Google
would have to achieve.

I _like_ Google. I just don't think it's good to have one company own a market
so completely.

~~~
workhere-io
_But, have Yahoo or Bing or DuckDuckGo made the transition to be able to crawl
the web with a full JS & DOM rendering engine?_

They can just use PhantomJS ([http://phantomjs.org/](http://phantomjs.org/)),
which is free and open source.
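
A minimal PhantomJS script along those lines (the fixed two-second settle
delay is an arbitrary assumption; a real crawler would need something smarter,
plus scheduling across billions of URLs):

    var page = require('webpage').create();
    var system = require('system');
    var url = system.args[1] || 'http://example.com/';

    page.open(url, function (status) {
      if (status !== 'success') {
        console.log('failed to load ' + url);
        phantom.exit(1);
      } else {
        // Give client-side JS a moment to render before dumping the DOM.
        window.setTimeout(function () {
          console.log(page.content);   // the HTML after client-side rendering
          phantom.exit();
        }, 2000);
      }
    });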

~~~
voltagex_
They could, but I wonder what it'd take to scale it to crawling that number of
pages.

I think only Bing would have the cash and resources to build that.

