
Principles of Rich Web Applications - rafaelc
http://rauchg.com/2014/7-principles-of-rich-web-applications/
======
baddox
This is a great article. It's extremely thorough, and touches upon most of the
difficulties I've encountered in my (limited) experience coding JS on the web,
as well as several I hadn't even considered.

My only complaint, if it can be considered a complaint, is that the author
doesn't address the real-life costs of implementing all his principles. The
main one is the complexity of the server- and client-side architecture
required to implement these principles, even for a minimal application like
TodoMVC [0].

I agree that user experience is extremely important, and perceived speed is
fundamental, but I certainly don't think it's important enough to justify the
cost of figuring out how to implement all these principles, especially for a
startup or other small team of developers.

Of course, the hope is that tooling will quickly progress to the point that
these principles come essentially for free just by following the best
practices of whatever libraries/frameworks/architectures you're using. There
was probably a time when the basic principles of traditional static web apps
(resourceful URLs, correct caching, etc.) also looked daunting for small
teams, but that's quite manageable now with Rails/Django/etc. (and maybe
earlier frameworks).

[0] [http://todomvc.com/](http://todomvc.com/)

~~~
enjalot
There is one such framework: [http://derbyjs.com/](http://derbyjs.com/). It's
been in development for about three years, so these principles certainly have
a cost to implement. It is used in production by Lever (YC) to ship very
usable enterprise software.

There is even a TodoMVC example being submitted for Derby:
[https://github.com/derbyjs/todomvc/tree/master/examples/derb...](https://github.com/derbyjs/todomvc/tree/master/examples/derby)

Another nice thing is that the templating engine can be used client-side only
if you want:
[https://github.com/derbyjs/todomvc/tree/master/examples/derb...](https://github.com/derbyjs/todomvc/tree/master/examples/derby-standalone)

------
brianberns
JavaScript "is the language of choice for introducing computer science
concepts by prestigious universities"!? God help us all.

I see the link to the Stanford course, but I hope it's still in the minority.
JS is not a language that I would want to teach to newbies, especially if
those newbies are on a path towards a CS degree.

~~~
lmartel
FWIW, CS 101 isn't a required (or popular) Stanford course, and isn't part of
the CS major. The intro series is 106A/106B/107, which are taught in Java,
C++, and C respectively.

There are plans to switch out Java for Python eventually but no major classes
are taught in JavaScript--the closest it gets is a graduate Programming
Languages class that spends a few weeks on JavaScript to illustrate closures
and first-class functions.

~~~
sanderjd
There's no undergraduate class that illustrates closures and first-class
functions (whether in javascript or otherwise)? That seems odd...

~~~
lmartel
The undergrad core (and degree) is 90% Java, C, and algorithms.

To be fair, though, many undergrads take graduate classes and the department
encourages it--the only difference is a 2XX course number instead of 1XX, and
sometimes a bit more rigor.

------
PavlovsCat
I can't stand how the Facebook feed updates in realtime. I read a bit, leave
the tab, and when I come back to it it has updated, so I have to find my place
again (which I don't do, I just go "screw facebook" and close the tab ^^). The
same goes for forums: if I want to see new or changed posts, I'll hit F5 -- and
when I don't press F5, it's not because I forgot, but because I don't want to.
Pressing it on my behalf is a great way to make me go elsewhere; or in the case
of facebook, to stay and resent you.

I don't _need_ to know in realtime how many people are on my website. I need
to know how many there were last week, and compare that with the week before
that. Likewise, I don't really need to see a post the instant it was made. At
least for me, the internet is great partly because people choose what they do,
how, and at what pace; because it's more like a book and less like TV, and
making it more like TV is not an improvement.

This is not against the article per se, which I found very very interesting
and thorough, just something I have to get off my chest in general. Though I
really disagree with the article when it comes to the facebook feed, I think
that should serve as an example for what _not_ to do.

Please, think twice, and never be too proud to get rid of a shiny gimmick when
it turns out it doesn't actually improve anything. Let's not sleepwalk into a
baseline of stuff we do just because everybody does it and because it's
technically more complex. As Einstein said, anyone can make stuff more complex
:P

~~~
benjoffe
I would argue that in the case of Facebook's updating feed, the idea is fine
but the implementation is buggy.

What should happen is that the new content loads above, and in the same
instant the document's scroll position is adjusted to preserve the previous
reading position. This allows new content to come in without disrupting your
experience.

~~~
PavlovsCat
That's fine for a few updates, but what if you have a lot? Memory may be
cheap, but why not use it for interesting things? I mean, what problem does an
auto-updating feed or endless scrolling solve that isn't easily dealt with by
pagination, expires headers and manual refresh? Other than "user stickiness is
not 100% yet", I mean, which I don't recognize as a valid problem.

------
andrewstuart2
A single page app is just this: a web app that never reloads the page or its
scripts. It doesn't matter how much content the initial page had on it, and it
certainly doesn't mean that you send an empty body tag. The history APIs even
let the back button and URL bar behave exactly as the user expects them to,
but without a single round trip if you already have the data and resources.
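
As a sketch of the relevant History API calls (the `render` function is
hypothetical and would draw views from already-fetched data):

    // Navigate within the app: change the URL without a round trip.
    function navigate(path) {
      history.pushState({ path: path }, '', path);
      render(path); // hypothetical: renders the view from local data
    }

    // Back/forward buttons fire popstate; re-render instead of reloading.
    window.addEventListener('popstate', function (event) {
      render(event.state ? event.state.path : location.pathname);
    });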

While it's certainly true that the first page may load slower, and you'll load
a few scripts as well, you never need to reload those again. Frameworks like
Angular encourage you to use a "service" mindset that capitalizes on this
property.

The longer you use a single page app, the fewer round trips you will have. If
you ask me, your communication should only be for raw materials (scripts,
templates) that you won't need to validate or request again during the current
session, and raw data (json). This is more loosely coupled, more cacheable at
all the different levels, and more scalable in large part due to the
decoupling.

Once the initial view loads, I totally agree that you should intelligently
precache all your resources and data asynchronously in the background, to
usher in the era of near zero-latency user interactions. Preferably, you do
this in an order based on historical behavior/navigation profiling, to make
the best use of the time/bandwidth you have before the next click.
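
A hedged sketch of what precaching ordered by historical navigation might look
like (the visit counts and API paths are invented for illustration):

    // Hypothetical click counts gathered from past sessions.
    var visits = { '/inbox': 120, '/settings': 5, '/reports': 48 };

    // Prefetch data for the most-visited routes first, one at a time,
    // so we never compete with a real user-initiated request for long.
    var queue = Object.keys(visits).sort(function (a, b) {
      return visits[b] - visits[a];
    });

    function prefetchNext() {
      var route = queue.shift();
      if (!route) return;
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/api' + route); // hypothetical JSON endpoint
      xhr.onload = prefetchNext;       // the browser's HTTP cache keeps the result
      xhr.send();
    }
    prefetchNext();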

I get the impression from articles similar to this one that there was once a
similar mindset surrounding mainframes and dumb terminals. The future is
decentralized, web included.

~~~
Offler
I think people have different use cases that they aggregate under "web
application". If you are building a desktop-replacement application, then I
can see that initial load might not be such a big deal; but if you are
building a less complex application like the Twitter UI, then server-side
rendering makes more sense. It's all context-specific in the end. Too hard to
make general claims.

~~~
andrewstuart2
Actually, I think Twitter is a prime candidate for a JSON-driven, client-
rendered application. It's a pretty static shell, with a static tweet template
iterated over large amounts of tweet data.

You could completely eliminate a huge number of round trips by just getting
tweet data and user data after the initial pageload. The user data is quite
cacheable and you could create a single endpoint that allows the user to
precache the required user data (display name, photo url, id, etc) for
everyone he follows in one round trip.
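
As a sketch of the client side of such a batch endpoint (the URL and response
shape are invented):

    // One round trip for every followed user's display data, cached locally
    // so rendering tweets never blocks on individual profile lookups.
    var profileCache = {};

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/following/profiles'); // hypothetical batch endpoint
    xhr.onload = function () {
      // Expected shape: { "123": { name: ..., photoUrl: ... }, ... }
      var profiles = JSON.parse(xhr.responseText);
      for (var id in profiles) profileCache[id] = profiles[id];
    };
    xhr.send();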

~~~
ehsanu1
[https://blog.twitter.com/2012/improving-performance-on-twitt...](https://blog.twitter.com/2012/improving-performance-on-twittercom)

They moved away from client-side rendering. Quoting:

 _There are a variety of options for improving the performance of our
JavaScript, but we wanted to do even better. We took the execution of
JavaScript completely out of our render path. By rendering our page content on
the server and deferring all JavaScript execution until well after that
content has been rendered, we’ve dropped the time to first Tweet to one-fifth
of what it was._

~~~
andrewstuart2
I'm not a Twitter historian, so I may be wrong, but I'd be willing to bet that
their client side rendering allowed them to scale the way they did. When your
customers' computers are doing half the work or more, adding a customer costs
you half as much or less.

------
protonfish
This is not a great article and is resistant to criticism by virtue of its
excessive length. Still, I'll try to point out some major flaws.

1\. Single page apps have many drawbacks that are conveniently not mentioned:
Slow load time, memory bloat, persistence of state on reload, corruption of
state and others. SPAs are just another misguided attempt at making web apps
more like desktop apps. Web apps are network applications - if you remove the
network communications portion, what is the point of them?

2\. JS is a great tool for making pages more responsive. This has been the
case for years, and I am at a loss as to why the author writes on and on about
it without any pointed observations or facts.

3\. Using push (web sockets) is a valuable tool for accomplishing particular
features. This does not mean that more is better and we should start using it
for everything. Server pull is a strong feature of the web and is arguably a
key to much of its success.

4\. Ajax is great, no argument.

5\. Saving state as a hash value in the URL not only puts JS actions into
history, but makes them visible and bookmarkable as well (a minimal sketch
follows this list). Push state is a quagmire.

6\. The need to push code updates is a problem caused by SPAs that doesn't
exist in normal web apps. Even so, this could be solved with a decent, as yet
unimplemented, application cache.

7\. Predicting actions is overkill. If you focus on doing everything else
well, there is no need to add significant amounts of complication. More code =
poorer performance and decreased maintainability.
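
Here is the minimal sketch promised in point 5 (the `showPage` renderer is
hypothetical):

    // Store view state in the hash: visible, bookmarkable, and in history for free.
    function goToPage(n) {
      location.hash = '#page=' + n; // fires hashchange, adds a history entry
    }

    window.addEventListener('hashchange', function () {
      var match = /#page=(\d+)/.exec(location.hash);
      showPage(match ? Number(match[1]) : 1); // hypothetical renderer
    });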

~~~
darkmarmot
I agree with all of your points but number 1. I think a good SPA should be a
network-reliant application whose complexity is demanded by the use case. And
a good framework for one should provide near-instant load times: a small core
library with additional resources and logic only loaded on demand.

~~~
protonfish
I think of good applications as a collection of SPAs. There is no benefit to
forcing every single feature of an app into only one page. Ajax means we can
implement more functionality without a page reload, which is good to a degree,
but at a cost of slower initial load, losing URL state, and difficulty tracing
UI behavior to underlying code.

I am working on a legacy SPA now and it's horrible. (Our users have learned to
refresh frequently due to uncertainty of state.) I am not sure how much to
blame on poor implementation as opposed to inherent weaknesses of SPA
architecture.

~~~
darkmarmot
Almost all of the SPA frameworks out there right now are nightmarish. I think
React is pretty good but limited. What you're describing is probably a much
better way to handle the needs of most sites.

------
mwcampbell
Is there any web application framework, presumably encompassing both client-
side and server-side code, that implements these principles? I'm guessing that
Meteor comes closest.

~~~
nateps
DerbyJS ([http://derbyjs.com/](http://derbyjs.com/)) is built specifically for
these principles. I'm working on big updates to the documentation right now,
but I can say that we have a great solution for Server HTML + Client DOM
rendering, realtime data updates, and immediate optimistic updates on user
interaction in the client.

By building on ShareJS, DerbyJS uses Operational Transformation for
collaborative editing of data in realtime, which means that users see data
updated immediately in their browser even if other users are editing the same
thing at the same time. This is the same approach used by Google Docs.
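
To give a feel for the core trick, here is a toy transform for concurrent
plain-text inserts; it is not ShareJS's actual type implementation:

    // transform() adjusts op b so it still applies after op a has been applied.
    // Each op inserts `text` at `pos`; `site` breaks ties deterministically.
    function transform(b, a) {
      if (a.pos < b.pos || (a.pos === b.pos && a.site < b.site)) {
        return { pos: b.pos + a.text.length, text: b.text, site: b.site };
      }
      return b;
    }

    // doc = "helo"; A inserts "l" at 2 ("hello"), B concurrently inserts "!" at 4.
    // transform(B, A) shifts B to position 5, so both sites converge on "hello!".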

~~~
nateps
Also, here is a video from a very informal talk I gave on similar concepts
last year at our office:
[https://www.youtube.com/watch?v=iTC5i63eOzc](https://www.youtube.com/watch?v=iTC5i63eOzc)

------
pothibo
Here are a few thoughts:

I can see that in terms of bandwidth, an SPA can be more efficient than a
normal HTML page. But this makes a few assumptions. First, that your JS
package never changes. As soon as one character changes in your package, the
cache is invalidated and the whole package needs to be downloaded. Like you
said, it's application-specific. But if your app has ~3 pageviews per session,
it becomes very hard to justify the use of an SPA.

As for acting as soon as there's user input, this can be done with or without
an SPA. One thing to mention, though, is that pull-to-refresh is gradually
falling out of favour.

Besides those 2 things, insightful post.

~~~
quaunaut
> I can see that in terms of bandwidth, an SPA can be more efficient than a
> normal HTML page. But this makes a few assumptions. First, that your JS
> package never changes. As soon as one character changes in your package, the
> cache is invalidated and the whole package needs to be downloaded.

Sure, but there are strategies against this, right? Generally, vendor and
third-party code doesn't change often, so minify and bundle all of that
together. Then you've got your core application code, which you try to keep as
small and fast as possible.

I will say, I'm not as experienced as the author of this piece, but at the end
of the day I feel like the author is making blanket statements that honestly
don't hold up to the reality of what users actually want. The piece also makes
assumptions about your stack and your resources: yes, if you've got incredibly
fast, top-of-the-line servers, server-side rendered pages are probably a
better idea, as the time difference between a JSON payload and the page being
rendered by the server is much smaller.

On the other hand, even a cheap Rails (i.e. slow) server with a CDN handing
off the client code can shove some JSON out no problem, and it can do it
_very_ fast -- even the worst-off users usually see only ~300ms total receive
time, which is generally only 100-200ms slower than your average server's
render time of the page alone.

Furthermore, it lets you offload who is delivering said content -- if a CDN is
serving all that JavaScript, then the initial render times may actually not be
much slower than if it were server-rendered.

\----

I also get the feeling a lot of people are making the mistake right now of
assuming that, because there's been a lot of evolution in the frontend
framework world in the past 2 years, we're hitting peak performance -- which
couldn't be further from the truth. Angular apparently (I'm going off of what
I've seen many say about 1.0 in the post-2.0 world) completely bungled
performance the first time around, but it'll be better next year. Ember is
already well on its way to being fast, and by summer of next year is going to
be blazing quick with all of the HTMLBars innovations.

I think we're barely getting started figuring out frontend frameworks. Even if
right now it may not be the best idea for your personal use case, I'd check
back once a year until the evolution slows down to make sure you don't end up
regretting not jumping in.

------
Illniyar
There are a few issues with these:

1) "Server-side rendering can be faster" \- the information in this part
quietly ignores the fact that:

    
    
      * even if you have server-side rendering, you are still going to load external javascript/css files
    
      * browsers optimize multiple resource loading by opening multiple concurrent connections and reusing connections
    
      * you can and should use cdn (hence actually lowering the 'theoretical' minimum time)
    
      * browsers cache excessively - and you can make them cache even for longer
    
      * the fact that rendering on the server-side takes a lot of cpu and hence increases response time dramatically the more requests are made
    

6) While reloading the page when the code changes is a good idea, hot-updating
JavaScript is a really bad idea: beyond the fact that it's terribly hard, it
will most likely result in memory leaks, and as far as I know no one is doing
it. It would also be extremely hard to maintain and debug.

The rest of the principles are quite true, informative, and should be
practiced more often (assuming you actually have the time to engage in these
kinds of improvements as opposed to making more features).

~~~
valisystem
To add some nuance to your 1) points about server-side rendering, here are
some counterpoints:

• You only need HTML and CSS loaded to show content to your user; the JS loads
while the user is looking at the content, so it has some time to be ready
before the first interaction.

• Client-side rendering still feels slower than showing content with only
HTML+CSS.

• For page content that changes a lot, if you rely on a CDN for HTML pages,
you need to update the content with JS on page load, and you either end up
with a splash wait-while-we-are-loading screen or a blinking christmas tree.

• If your HTML is small enough, the cache-checking round trip is not much
faster than loading the content itself, while JS rendering needs a cache round
trip AND a data-loading round trip. You can eliminate some HTML round trips
with cache expiration, but at the expense of reliable deployments.

• Still, JS rendering/updating can be slower than server-side CPU, especially
on mobile devices.

~~~
Illniyar
There is a very simple way to get both maximum caching (without a cache
revalidation round trip, i.e. no ETag or Last-Modified checks) and reliable
deployments: use version identifiers in the URL and a far-future expiration
cache.

The only thing the browser should always load is your base HTML, which links a
single concatenated and compressed JS/CSS file whose URL changes on every
deployment - most web frameworks already have a way of doing this (Rails,
Django, etc.).
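
A hedged sketch of that setup using Express (the build hash and file names are
illustrative; serve-static's `maxAge` option is real):

    var express = require('express');
    var app = express();

    // The bundle's name embeds a build identifier, e.g. app-5f2c81.min.js,
    // so its URL changes on every deploy and can be cached "forever".
    var REV = '5f2c81'; // hypothetical build hash

    app.use('/assets', express.static(__dirname + '/public', {
      maxAge: 365 * 24 * 60 * 60 * 1000 // one year; safe because URLs are immutable
    }));

    // The base HTML is the only thing the browser must always revalidate.
    app.get('/', function (req, res) {
      res.set('Cache-Control', 'no-cache');
      res.send('<script src="/assets/app-' + REV + '.min.js"></script>');
    });

    app.listen(3000);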

------
glifchits
Wow, the views counter... Haven't read the article yet, just astonished at the
rate of increase...

~~~
beenpoor
Do you know how the counter works? I am a JS noob. I see at the bottom of the
script that he's updating the counter, but who is calling the update?

~~~
Rauchg
It's a WordPress website that communicates with a Socket.IO server. I wrote
about how to accomplish this here:
[http://socket.io/blog/introducing-socket-io-1-0/#integration](http://socket.io/blog/introducing-socket-io-1-0/#integration)

It's true to the spirit of the post as well: the count gets rendered on the
server, then reactive updates come in realtime and the view is updated.
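
For the curious, the gist of such a counter looks roughly like this; a sketch,
not the site's actual code. On the server:

    // server.js -- broadcast the total whenever someone connects.
    var io = require('socket.io')(3000);
    var views = 0;
    io.on('connection', function (socket) {
      views++;
      io.emit('views', views); // push the new total to every open page
    });

And in the page, where the markup already contains the server-rendered count:

    // Uses the client served at /socket.io/socket.io.js.
    var socket = io();
    socket.on('views', function (n) {
      document.getElementById('views').textContent = n; // element id assumed
    });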

~~~
elwell
Love your work, but the yellow flashes did encourage me to scroll that counter
above the fold before I started reading. I get distracted easily.

------
aliakhtar
> Server rendered pages are not optional

> Consider the additional roundtrips to get scripts, styles, and subsequent
> API requests

If you're using a framework like GWT, it compiles all of the relevant CSS
files, JavaScript, and UI template files into one .html file. Then there are
only one or two HTTP requests to download this HTML file, and the server only
has to handle requests for fetching data, updating or adding stuff, etc. You
can also gzip + cache this .html file to make it even smaller.

It runs lightning fast, too.

~~~
megaman821
This isn't as awesome as it sounds. When inlining everything, there is no
control over the prioritization of resources. Large files like images can
block the rendering of your layout, giving the appearance of slowness even if
the overall download time is less. HTTP/2 has stream prioritization to solve
this problem and is much more cache-friendly.

~~~
aliakhtar
> When inlining everything there is no control over the prioritization of
> resources.

You can use split points to divide up your code, only the resources in a given
split point are loaded. Example: If the user is viewing your 'Sign up' page,
then only the resources for the 'Sign up' page will be loaded.

> Large files like images can block the rendering of your layout

Only small to medium files are inlined. Large files are downloaded as usual.

------
Cyranix
Seems like a well-reasoned set of opinions at first blush. I'll have to give
it more time to sink in for the most part, but the one bit that elicited
immediate disagreement from me was the particular illustration of predictive
behavior. There is unquestionably value in some predictive behaviors (e.g.
making the "expected path" easy) but breaking with the universal expectations
of dropdown behavior doesn't seem like a strong example to follow.

------
EGreg
Funnily enough, I've had to deal with many of these when implementing
[http://platform.qbix.com](http://platform.qbix.com)

I pretty much agree with everything except #1. Rendering things on the server
has the disadvantage of re-sending the same thing for every window. I am a big
fan of caching and patterns like this:
[http://platform.qbix.com/guide/patterns#getter](http://platform.qbix.com/guide/patterns#getter)

You can do caching and batching on the client side, and get a really nice,
consistent API. If you're worried about the first load, then concatenate all
your JS and CSS, or take advantage of app bundles by intercepting stuff in
PhoneGap. Give the platform I built a try; it does all that stuff for you,
including code updates when your codebase changes (check out
[https://github.com/EGreg/Q/blob/master/platform/scripts/urls...](https://github.com/EGreg/Q/blob/master/platform/scripts/urls.php)
which automagically makes it possible).

I would say design for "offline first" and other stuff should fall into place.

------
quarterwave
For real-time updates _in response to user actions_ , which is a bigger
concern: average latency, or its variance?

Example: Server generates a sine wave which gets displayed as a rolling chart
waveform on the client. As client spins a knob to control the amplitude, the
server-generated stream should change (sine wave is a trivial example,
representative of more complex server-side computation).

~~~
lambeosaurus
The real-time updates he's talking about don't require server-side processing
- the Google homepage switching immediately to the search view, for instance -
that processing can be contained within the JavaScript application, and state
is simply maintained against the server (and then, by extension, across other
instances of the application).

I don't imagine he's suggesting we try the same approach where server-side
processing is required.

If I have misunderstood you then I apologise.

~~~
quarterwave
You're right, and your explanation helped me understand the article better
when I read it again. Thanks.

------
einrealist
I really like the simplistic principles [http://roca-style.org](http://roca-style.org)
defines for web applications.

I find single page applications way too complex. The amount of code
duplication is horrific. So everyone ends up building platforms like GWT or
Dart in order to hide that overhead. But that does not mean that things get
simple.

(Maybe I'm getting old.)

~~~
pluma
I can see where you're coming from but I find that React (with node on the
server and a RESTful database) eliminates a lot of the code duplication
because I can run the same view rendering logic on the client and the server.

ROCA is an appealing idea, but my concern is that to make the the-API-is-the-
web-client approach work (which ROCA, as I understand it, seems to advocate),
you end up mixing two entirely separate levels of abstraction: what may be a
good abstraction at the API level may not be a good abstraction at the UI
level. It's sufficient if your web app is just an API explorer, but not every
app lends itself to that.

You could say that then we shouldn't be building those apps, but that's simply
not realistic.

------
daigoba66
A neat example is github.com. When browsing a repository it refreshes only the
relevant part of the page. But the URL changes and can be used to navigate to
a specific resource.

But, as the article points out is often the case, the HTML github.com loads
does not include the already-rendered resource; it must be pulled in via a
separate request.

~~~
lucaspiller
The way GitHub works is pretty decent, but also pretty basic. It uses PJAX, so
the HTML is still rendered on the server but only the body content is swapped
in.

It still has a few issues though. I work on flaky connections now and again,
and sometimes it just gets stuck - it would be nice if the request were
retried automatically after a few seconds.
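
The essence of PJAX, sketched in plain JavaScript (GitHub actually uses the
jquery-pjax plugin; the container id and the timeout fallback here are
simplifications):

    // Intercept same-origin link clicks, fetch the new page, swap the container.
    document.addEventListener('click', function (e) {
      var link = e.target.closest ? e.target.closest('a') : null;
      if (!link || link.host !== location.host) return;
      e.preventDefault();

      var xhr = new XMLHttpRequest();
      xhr.open('GET', link.href);
      xhr.setRequestHeader('X-PJAX', 'true'); // server replies with the fragment only
      xhr.onload = function () {
        document.getElementById('container').innerHTML = xhr.responseText;
        history.pushState({}, '', link.href);
      };
      xhr.timeout = 5000;
      xhr.ontimeout = function () { location.href = link.href; }; // fall back to a full load
      xhr.send();
    });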

------
jfroma
> "Server rendered pages are not optional"

I don't get this; in my opinion they are optional. You can show the iOS PNG
placeholder (shown in the next item), which is very static, cacheable content,
while fetching your highly dynamic data from a database or somewhere else.

It feels like the first principle contradicts 2, 3 and 4.

~~~
jamesbrewer
The solution you offer is exactly what server rendering is meant to prevent.
You should show content as fast as possible, and that means rendering it on
the server.

Please stop putting loading icons and spinners where your content should be.

~~~
cbsmith
I hear you on this, but the notion that client side is inherently a bigger
download is kind of crazy, no?

Heck, if the concern is really about having lots of round trips, rather than
server side rendering, you could have the server side stitch the client side
components and still allow client side rendering. In fact, doing it that way
makes it a heck of a lot easier to avoid reloading the entire page each time.
Some kind of weird disconnect here.

~~~
jamesbrewer
It depends. I've worked at a company where the minified and gzipped JavaScript
file was over 1MB because multiple libraries were being included for the use
of one function.

That's obviously an extreme example.

Ultimately it's the engineer's job to make good decisions.

~~~
cbsmith
> That's obviously an extreme example.

No, I think that's missing the point. Sure it could be larger, but presuming
you are trying to optimize the experience, there is nothing that would require
doing it server side.

Why would the total payload needed to render the page client side _have to be
larger_ than if it were rendered server side? Unless you are talking about
rendering an image with client-side logic instead of sending a PNG/JPEG (in
which case, sure, but that isn't what most people are talking about), I can't
quite see it.

------
darkmarmot
Just one thing to point out: it seems as if a lot of your SPA arguments are
predicated on the idea that apps don't chunk and/or stream their logic. While
the front-end SPA framework I use is currently pretty bad for SEO, almost none
of the download or latency issues are applicable...
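
For instance, a minimal on-demand chunk loader (the URL and callback names are
invented):

    // Load a feature's code only when the user first needs it.
    var loaded = {};
    function loadChunk(url, done) {
      if (loaded[url]) return done();
      var script = document.createElement('script');
      script.src = url;
      script.onload = function () { loaded[url] = true; done(); };
      document.head.appendChild(script);
    }

    // e.g. loadChunk('/js/reports.chunk.js', openReportsView); -- both hypothetical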

------
dllthomas
I think these are great ideals to strive for, but they seem lower priority
than a couple things that they can get in the way of if you're not careful.

First, in your quest to show me the latest info, _please please please_ don't
introduce race conditions into my interface. I don't want to be about to hit a
button or a key or type a command, and have _what that means_ change as I'm
trying to do it.

Second, it's often important to me what has happened locally vs. what is
reflected on the server (especially if that's public). Please _do_ update the
interface optimistically in response to my actions rather than sitting and
spinning, but please _also_ give me some indication of when my action is
complete.
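
A sketch of a pattern that satisfies both requests (the endpoint and class
names are made up):

    // Optimistically render the comment, but mark it pending until the server confirms.
    function postComment(text) {
      var el = document.createElement('li');
      el.textContent = text;
      el.className = 'pending'; // e.g. greyed out in CSS
      document.getElementById('comments').appendChild(el);

      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/api/comments'); // hypothetical endpoint
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.onload = function () {
        if (xhr.status === 200) el.className = 'saved'; // visibly confirmed
        else el.className = 'failed';                   // visibly not on the server
      };
      xhr.send(JSON.stringify({ text: text }));
    }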

------
MrBra
"A slightly more advanced method is to monitor mouse movement and analyze its
trajectory to detect “collisions” with actionable elements like buttons."

Is this a joke?

~~~
jamhan
Read this: [http://bjk5.com/post/44698559168/breaking-down-amazons-mega-...](http://bjk5.com/post/44698559168/breaking-down-amazons-mega-dropdown)

for another useful example of this technique.

~~~
MrBra
This is totally different and makes sense. You simply observe whether the
mouse moves away within a certain angle, so as not to make a submenu
disappear. This is cheap and useful. It helps the submenu stay visible while
you are moving toward it, without having to move the mouse exactly along the
tiny strip that connects the two menus. This can actually save like 10 seconds
each time (for inexperienced users)!

The other one will AJAX-preload a dropdown's content when it detects that the
current mouse trajectory is in line with it. Come on.

------
derengel
When you are developing a web application for phone, tablet and desktop, is it
a good principle to use the same HTML for all three and a separate CSS for
each device? Is there a case where this would cause problems?

~~~
bliti
It depends how this is set up. If you are using a template engine, then I
don't see why this would be a big deal, as long as it's a technical decision
followed throughout the project. If you are not using a template engine and
are using JavaScript to throw things around (like a bunch of jQuery piled on
top of itself), then it becomes an issue.

Are you using any kind of server side framework (Like Django/Rails)?

Are you using any kind of client side framework (like Angular)?

Are you using any kind of layout framework (like Bootstrap)?

------
moron4hire
Can we first discuss design 101, i.e. don't put blinking elements in the
user's periphery unless it's something really super important? The page view
counter not being such a thing.

~~~
plainOldText
It could be something important. I don't know the exact intention behind this
particular view counter, but consider, for instance, the scenario of using
such a counter as an element aimed at enhancing credibility and establishing
trustworthiness. You'd be more likely to read an article knowing it was also
read by 100K other people, wouldn't you?

