
Asynchronous UIs - the future of web user interfaces - maccman
http://alexmaccaw.co.uk/posts/async_ui
======
crazygringo
For truly trivial things like upvoting comments, fine.

But for sending an e-mail? Not in a million years. I want to see the spinner,
and then know that it was actually _sent_ , so I can close my web browser.

E-mails can sometimes be terribly important things.

If my e-mail web app always instantaneously tells me "sent!", then I never
have any idea whether it actually was -- how long do I have to wait before I
can be sure it won't come back with "sorry, not sent after all"? What if the
app doesn't get back an error code, but the connection times out? What if the
app doesn't implement a timeout?

Basically, if I don't get a real, delayed "sent" confirmation, then I know
there was a problem and can investigate or try again. But if I get an
instantaneous "sent" confirmation, and then don't get a "sorry, there was a
sending error" message, I can't be 100% confident that the data actually got
to the server, because maybe there was a problem with triggering the error
message. And since I'm a web developer, I can imagine all SORTS of scenarios
that a programmer might not account for that would prevent an error message
from being displayed.

~~~
noahl
But there's a third choice, which I hope is the sort of thing he meant:

as soon as you send a message, it goes into a little list on the side of your
screen of things that are transferring to Google's servers. You can see it
there, and you will see it go away when it has been transferred, so you know
what's going on. But in the meantime, you can go back to your inbox, look at
other emails, or do whatever else you want. That's how an asynchronous
interface should be done.
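
This pending-list pattern is easy to sketch. The following is a minimal illustration (the `createOutbox` name and API are invented, not taken from any real mail client): each send appears in a pending list immediately and drops off once the server confirms, so the rest of the UI never blocks.

```javascript
// Sketch of a "pending transfers" list: the UI stays usable while each
// send is tracked in `pending` until the server acknowledges it.
function createOutbox(send) {        // send: (message) => Promise
  const pending = [];                // items currently transferring
  return {
    pending,
    dispatch(message) {
      pending.push(message);         // shows up in the side list right away
      return send(message).finally(() => {
        pending.splice(pending.indexOf(message), 1); // gone once confirmed
      });
    }
  };
}
```

Rendering `pending` in a corner of the screen gives the user the "what's still in flight" visibility described above without blocking anything.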

One thing I didn't notice the article mentioning is that it's possible to have
blocking only for certain parts of an interface. So if you press a "load
picture" button, then maybe a gray square with a spinner will appear, but the
rest of the interface should continue working as usual.

~~~
stingraycharles
But at that point, you still have a blocking call to put it on the queue at
Google's servers. Which can just as well be, well, a mail server (which
maintains a queue itself). So adding another layer of abstraction on top of
it kind of defeats the purpose.

~~~
iradik
Interesting point, but what happens if you click "send!" and then close your
laptop? Is your message sent or not?

While your point about the server end being a queue is true, there's an
expectation that once your message is offloaded onto Google's queue, they will
reliably process the message in a reasonable amount of time.

------
wrs
Ah, thick clients are coming back again, and now we've reached the point where
people start trying to build asynchronous applications because they're
frustrated with choppy UI.

Unfortunately, pretending the network isn't there doesn't make it so. The
flakiness has to come out somewhere, sometime. Either you make the user wait
now, or you explain later, after you've lied about what you did. It's a tricky
tradeoff.

Let's fast-forward to the end of the movie: You'll end up with a zillion
special cases that are impossible to test properly. You'll decide to restore
sanity by replicating the data into a client-side store with low latency and
high reliability, so you can go back to a synchronous UI that your developers
can reason about. All the craziness will be in a background process that syncs
the client and server stores, which will still have to cause weird behavior as
reality demands it, but at least the logic is contained. (I just described an
IMAP mail client, or--for a Normandy-invasion-scale example--some versions of
Microsoft Outlook.)

Then a new thin client platform comes along where you can't do all that
complicated client-side stuff. The cycle repeats.

~~~
rsanheim
Exactly. Everything old is new again.

What the OP is suggesting carries significant costs for real-world apps, and
you can't abstract them away in a framework or library, as much as you may
wish it to be the case.

See also:
[http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Comput...](http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing)

------
jashkenas
Nice post. I'd like to briefly respond to the bit about the difference between
Spine, which generates pseudo-GUIDs for models created on the client, later
overwriting them if the server responds with a real id; and Backbone, which
has a "cid" (client ID) for every model regardless of the canonical server ID.

The reason why Backbone provides a persistent client id for the duration of
every application session is so that if you need to reference model ids in
your generated HTML, you _always_ have something to hang your hat on. If I
have '<article data-cid="c530">' ... I can always look up that article,
regardless of whether the Ajax request to create it on the server has finished or
not. With Spine's approach: '<article data-
id="D6FD9261-A603-43F7-A1B2-5879E8C7926B">' ... I'm not sure if that id is a
real one, or if it's temporary, and can't be used to communicate with the
server.

Optimistically (asynchronously, in Alex's terms) doing client-side model logic
is tricky enough in the first place, without having to worry about creating an
association based off a model's temporary id. I think that having a clear line
between a client-only ID and the model's canonical ID is a nice distinction to
have.
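
The distinction can be sketched roughly like this (simplified and with invented names; not Backbone's actual implementation): the client id is assigned at creation and stays valid for the whole session, while the canonical server id arrives later without invalidating any existing lookups.

```javascript
// Rough sketch of a session-stable client id ("cid"): usable in
// data-cid="..." immediately, unaffected by the server id arriving later.
let cidCounter = 0;
const byCid = {};

function createModel(attrs) {
  const model = { cid: 'c' + (++cidCounter), id: null, attrs };
  byCid[model.cid] = model;   // lookups by cid work before the Ajax create returns
  return model;
}

function onCreateResponse(cid, serverId) {
  byCid[cid].id = serverId;   // cid stays valid; id is now the canonical one
}
```

Markup like `<article data-cid="c1">` can therefore always be resolved back to its model, whether or not the create request has completed.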

~~~
danmaz74
Couldn't you just allocate a pool of IDs to each session (open client), and
let the client generate real, unique IDs directly from it? This way you
wouldn't need collision detection, synchronization, etc. You only need a big
enough ID space, or a way to reuse IDs from the pool that weren't actually
used by the client (by including them in new pools).
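
A sketch of that pool idea, where `requestRange` stands in for a hypothetical server call that reserves a disjoint range of ids per session: the client mints real, collision-free ids locally and only goes back to the server when the pool runs dry.

```javascript
// Client-side id pool: ids are real (server-reserved), so no temporary-id
// swap is ever needed; exhaustion just triggers another range request.
function createIdPool(requestRange) {   // requestRange: () => [start, end]
  let [next, end] = requestRange();
  return {
    nextId() {
      if (next > end) [next, end] = requestRange(); // pool exhausted: refill
      return next++;
    }
  };
}
```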

~~~
nazar
I am afraid that would create a mess in a db (it's just my guess), though your
idea sounds nice to my ears. The hardest part would be if a session runs out of
ids, I guess. One can never allocate an optimal pool of IDs for every user;
there are always going to be "bad cases".

~~~
danmaz74
If the id space is big enough, not having optimal pools wouldn't be a problem.
And if you are near the exhaustion of a pool, just request a new one...

------
gfodor
The author managed to pick the worst possible example of a site 'doing it
wrong.' First, GMail practically _invented_ the asynchronous UI, you'd think
they know what they're doing. And, of course, they do. The reason it blocks
when you send an e-mail is because that way you can be sure the damn thing was
actually sent.

~~~
tlrobinson
Have you ever tried searching/labeling/filtering large amounts of email in
Gmail? I did last week, and it was pretty frustrating.

------
patio11
I love the feeling of immediacy users get when using AJAXy applications over
render-view-submit-rerender applications, and my users actually comment on
this to me (not in as many words, but they say that it is "light", "fast",
"easy to use", etc), but the development costs of going the extra mile to
asynchronous strike me as likely to be very high indeed. It already costs me
about 5x development time to do something client side versus server side,
just because of how much time wiring up Javascript takes. (And praying it
doesn't break, because Javascript is orders of magnitude harder to test than
Ruby is.) The costs for rewriting the entire app to exist simultaneously in
the browser and the server, and to magically never fall out of sync even when
users do something user-y, scares the heck out of me.

The whole toolchain for reasoning about stuff happening in the browser is
still lagging a few years behind what we have on the server, which is a related
but larger problem. We have Firebug, which gets us truly revolutionary
features like "output log messages... in a browser!" and "inspect the internal
state of objects in memory... in a browser!" But many of the rest of the
cutting edge developments from the 60s and 70s haven't quite made it to the
browser yet, or they're not yet at the point where they can be used by
mortals. (Selenium: I want to love you, and yet I can't actually use you for
anything because you break my brain.)

~~~
marknutter
And then some young-gun will come along and create Ruby on Rails for the
client side and all these frustrations will be abstracted away. Something
being hard to do doesn't mean it shouldn't be pursued.

~~~
hello_moto
It's kind of hard to agree with your statement, although it has some merit.

Rails is a glue for a bunch of things; it's not like those don't exist in
other environments.

Meanwhile, the state of JS testing is still far behind. Like really really far
behind.

Once you get that sorted out, you're still left needing a common set of
libraries. Still a long way to go, really.

Not to mention that the whole client-side thing is ripe for a change
(depending on whom you talk to).

------
yuliyp
I really dislike the attitude of "errors are rare, so don't spend much time on
them" espoused by the article. Errors are rare in the sense that you will
often miss them, but most of your users will run into them.

Let's say your AJAX requests have a 0.1% chance of failure. If your users
perform a thousand actions each on average, then roughly 63% of your users
(1 - 0.999^1000) will have been exposed to your error flow. Hope it's better
than "Sorry, an error occurred."

Individual errors are rare compared to successes. Overall errors happen all
the time.
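
Assuming independent failures, the share of users who see at least one error can be checked directly:

```javascript
// Chance that a user performing 1000 actions, each with an independent
// 0.1% failure rate, hits at least one error.
const pFail = 0.001;
const actions = 1000;
const atLeastOne = 1 - Math.pow(1 - pFail, actions);
console.log(atLeastOne.toFixed(3)); // ≈ 0.632, i.e. roughly two thirds of users
```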

~~~
wanderful
I wouldn't say he's saying to spend less time on them. He's saying that they
are the exception, so don't make the whole interface depend on the possibility
of them. Proceed assuming the optimal case, and when they do happen, provide a
safe, friendly means of dealing with them.

------
corin_
The highlight of this article for me is:

    
    
      Amazon: 100 ms of extra load time caused a 1% drop in sales (source: Greg Linden, Amazon).
      Google: 500 ms of extra load time caused 20% fewer searches (source: Marissa Mayer, Google).
      Yahoo!: 400 ms of extra load time caused a 5-9% increase in the number of people who clicked "back" before the page even loaded (source: Nicole Sullivan, Yahoo!).
    

Answered my own question, but will leave it here for anyone else interested:
does anyone have the sources for those facts?

edit: Original source for Amazon stat (possibly also Yahoo, or possibly it's
just referenced) is a powerpoint by Greg, downloadable at
[http://7303294208304035815-a-1802744773732722657-s-sites.goo...](http://7303294208304035815-a-1802744773732722657-s-sites.googlegroups.com/site/glinden/Home/StanfordDataMining.2006-11-29.ppt)

edit 2: The Google stat is from a speech at a 2006 "Web 2.0" conference,
referenced by Greg at [http://glinden.blogspot.com/2006/11/marissa-mayer-at-
web-20....](http://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html)

edit 3: Yahoo stat from Nicole's presentation here:
[http://www.slideshare.net/stubbornella/designing-fast-
websit...](http://www.slideshare.net/stubbornella/designing-fast-websites-
presentation)

------
jpastika
I recently used several of the techniques described, but I carefully chose
when and where to implement them. For example, when a user "deletes" an item,
rather than removing anything from the DOM before the request, I hide the
appropriate elements, send the request, and if successful, remove DOM
elements. The advantage of this approach is that the UI feels snappy, but it
is easy to fall back if something goes wrong. Being optimistic that things
will "just work" is alright in a fairly controlled environment, but when
mobile is introduced, a mix of optimism with a soft fallback is a good
approach.
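
A condensed sketch of that hide-first approach (with the DOM manipulation reduced to flags on an item object, and `deleteOnServer` standing in for the actual request): the item vanishes instantly, is removed for good only on success, and reappears as the soft fallback on failure.

```javascript
// Optimistic delete with a soft fallback: hide immediately, remove on
// confirmation, un-hide if the request fails.
function optimisticDelete(item, deleteOnServer) {
  item.hidden = true;                 // instant feedback: UI feels snappy
  return deleteOnServer(item.id).then(
    () => { item.removed = true; },   // confirmed: safe to remove from DOM
    () => { item.hidden = false; }    // failed: item simply reappears
  );
}
```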

~~~
huhtenberg
From the UX perspective this approach is still wrong. Any sort of operation
that may fail needs to provide an intermediate indication of an in-progress
activity.

For example, if an item is updated and the backend balks, but only 10 minutes
later, there is no clear and concise way to indicate this error to the user
_unless_ the item was marked as "in-progress". If the backend is normally
snappy, then it might make sense to delay showing the in-progress indicator
(so that the majority of users won't ever see it), but discarding it
altogether is not a way to go.

Another example, say there is a list of items keyed by a _name_. I delete A,
then rename B into A, and then the deletion of A fails. Ain't that a pretty
mess to shovel yourselves out of?

That's not to say that there aren't certain UIs that could be made to work in
"instant" fashion, but realistically there just aren't very many of them.

~~~
jpastika
You make some good points, but I don't think the approach I describe is wrong,
unless like you said, I allowed requests to go on indefinitely. In the
scenario you describe, it certainly makes sense to block the UI if editing
uniquely identifying information is co-mingling with delete actions. I think
the point we are both trying to make is that creating responsive applications
should not be at the expense of the user's understanding of what is happening.
This is a difficult balancing act, but perception is an important part of the
UX, and whether we like it or not, if using an application feels faster it
will generally be perceived as a better experience.

~~~
huhtenberg
> ... _it certainly makes sense to block the UI_ ...

There's no need to block the _UI_. It is perfectly sufficient to disable just
the affected item.

> ... _creating responsive applications should not be at the expense of the
> user's understanding of what is happening._

In this case you are bound to repeat Microsoft's Distributed COM fiasco. They
tried to blur the line between accessing in-process, in-machine and over-the-
network services behind an abstract API. It was nice in theory, but in
practice it was a disaster. It is really hard to write a meaningful app -
even an asynchronous one - when an API call can take anywhere between a few
ms and several seconds to complete.

In case the parallel is not clear - their idea was the same as yours - "devs
need not know what's happening". This does not work. Devs need to know, as do
users in your case. Perception is indeed an important part of the UX, no
arguing there, but the UI needs to be designed in a way that precludes users
from making false assumptions that would prove frustrating and disastrous
should the backend go kaput. Faking snappiness does the opposite: it
encourages make-believe.

------
weixiyen
There's something to be said about user confidence: a user's confidence level
directly correlates with their productivity in your app.

I could not have thought of a worse example than removing a progress indicator
from sending an email. Making an "async UI" work in a fluid way that provides
confidence to the end user is much harder than simply changing the state
immediately and hoping that 99% of the time, it works.

Error handling can be a pleasant experience if done correctly, and in this
blog post it's just an afterthought.

Here's a better way to do it:

\- I click "Send Mail"

\- My UI changes as if it were sent, allowing me to do other things in the
meantime.

\- I receive a growl notification in some other part of the UI that tells me
the email has been successfully sent.

\- If 1 second has gone by and I did not receive a response from the server to
confirm that the mail has been sent, I will see an indicator that tells me
that the sending is in progress, where the growl indicator would have been.

\- If it is an error, the indicator changes and allows me to click it to go
back to the mail composition view.
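
The flow in those steps can be sketched as follows (all names invented; `ui` stands in for whatever renders the growl/indicator area): the UI moves on instantly, an in-progress indicator only appears if the server hasn't confirmed within the threshold, and the terminal state is either a success notification or a clickable error.

```javascript
// Optimistic send with a delayed progress indicator: the spinner is only
// shown if confirmation takes longer than `timeoutMs` (1s by default).
function sendWithIndicator(send, ui, timeoutMs = 1000) {
  const timer = setTimeout(() => ui.showProgress(), timeoutMs);
  return send().then(
    () => { clearTimeout(timer); ui.showSuccess(); },    // growl: "sent"
    () => { clearTimeout(timer); ui.showErrorLink(); }   // click: back to compose
  );
}
```

Most users on a fast connection never see the spinner at all, which is exactly the point: the indicator exists, it's just deferred past the common case.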

The concept of providing perceived performance is not new, but the details
are in the execution, and you will shoot yourself in the foot if you don't
cover all the little details that are required to make something like this
work.

Otherwise, some company is going to implement some jarring async UI
incorrectly and piss off a lot of users.

Yes, blocking a UI is bad, but notifying the user of progress and task
completion is a very good thing.

------
azov
There's a difference between a non-blocking UI and a UI that hides the
progress of an operation that actually takes time. I'm all for non-blocking
UIs; sure, let me do other things while I wait. I'm not so crazy about hiding
progress. Call me a control freak, but I do want to see that the action I
requested has actually completed, not just that it appears to have.

~~~
jeromegn
I don't think that's true for most users. Coders might think of it that way
because they understand the complexity of apps.

When average users complete an action and see the results instantly, they're
not wondering if something went wrong, or if something is ongoing. They've
already had UI feedback suggesting a successful result.

~~~
wvenable
But if the operation fails but appears to succeed the user has an even bigger
problem.

~~~
jeromegn
That's not an issue with the UI.

It goes without saying that mechanisms in the back-end need to be implemented
in order for AUI to provide a great user experience.

For instance:

\- On error, an action should be retried.

\- Long-lived processes should be queued and, upon failure, requeued.

Hopefully, the user won't reload the whole javascript app before this action
is successfully completed, and should never notice that it failed in the first
place.
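
That retry-on-error idea can be reduced to a few lines (a toy version; backoff, persistence and the requeueing of long-lived processes are left out, and `attempt` is any function returning a promise):

```javascript
// Retry a failed action a few times before surfacing the error;
// only an exhausted retry budget reaches the user-facing error path.
function withRetry(attempt, retries = 3) {
  return attempt().catch(err =>
    retries > 0 ? withRetry(attempt, retries - 1)
                : Promise.reject(err)   // exhausted: now notify the user
  );
}
```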

It's not perfect, but I find that it is better. There's definitely work to be
done on the error handling front. The UI should be able to clearly notify the
user when something he previously did, didn't work.

~~~
wvenable
You're saying "hopefully" the user won't visit a different site in the same
tab or close down the browser entirely after completing an operation. That's
quite the hope.

Merely notifying the user, long after, that an error occurred and his work is
not saved is hardly sufficient. Even assuming that the user doesn't leave your
app, they could easily be off doing something else miles away from that
operation.

~~~
wahnfrieden
You do know that you can show the user a prompt warning them that they'll lose
unsaved data if they close the page or go to another URL, right? That solves
this issue.
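
The guard described here is a standard browser mechanism. A minimal sketch (the handler is written as a standalone function for testability; in a browser you'd register it with `window.addEventListener('beforeunload', ...)`, and `pendingRequests` is a hypothetical counter your app would maintain):

```javascript
// While requests are still in flight, leaving the page triggers the
// browser's generic "unsaved changes" confirmation prompt.
let pendingRequests = 0;

function beforeUnloadHandler(event) {
  if (pendingRequests > 0) {
    event.preventDefault();     // modern browsers
    event.returnValue = '';     // legacy browsers need returnValue set
    return '';
  }
}
// browser usage: window.addEventListener('beforeunload', beforeUnloadHandler);
```

Note that browsers show their own fixed message; custom prompt text has been ignored for years.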

~~~
wvenable
Then what happens? How long is the user expected to sit there waiting for
unsaved data to save that might never save?

And what about the second point; How useful is it to alert the user that
something they thought was done wasn't long after they've stopped caring about
it?

~~~
wahnfrieden
That's a UI issue. You can show a more prominent loading progress indicator if
they elect to stay on the page after trying to leave it, so that they know
when it's safe to exit.

And on the second point, presumably you'd always show some kind of indication
that the message is sending, just not one that blocks the rest of the UI.

This is how desktop mail clients I've used work. The sending indicator is
small, but if I try to exit before it's finished sending, it blocks the exit
and alerts me about it.

~~~
wvenable
The problem with using email as an example is that it's too perfect. Everyone
is comfortable with the concept of the outbox where messages go to be sent.
Email messages are fully self-contained and independent from every other
message. And while in an email client, all you do is send and receive
messages, so any UI related to that is fully expected.

The real question is how this would apply to applications that are slightly
more complicated: an app where operations have consequences beyond just the
single item you are working on; where users are clicking "save" and not
"send"; and where users move between entirely different entities. It seems
like this adds a lot of additional complexity for very little perceived
performance gain.

~~~
wahnfrieden
Obviously you need to weigh the gains vs the development time required for
adding and maintaining this, as with anything.

In terms of the user experience, if the perf gains are small enough, then the
chance that the user tries to exit while a request is in-progress should be
slim, so interrupting their exit is fine as an edge case, and the gains in
responsiveness should be weighed against your core user or business metrics -
several hundred ms extra delay per action _can_ have a significantly negative
impact.

If the gains are large enough that the user is likely to interrupt something
when exiting, then blocking the entire UI for each request is a terrible
experience and you need to do something about it anyway.

------
zv
Nice idea, definitely not new. There is one major problem with this approach:
you save a document, request processing, navigate away from the page, start
new work, and after 30 secs your request fails. Now the code complexity for
you to handle this situation is high. Multithreaded/asynchronous systems are
always hard.

~~~
phzbOx
The author specifically talks about that in the article saying that you can
catch "leaving a page" and notify the user that there's still something
pending and data would be lost.

And in the case of a big error, you can either refresh the page.. or clear
the models and resend them in json format to stay in sync with the server. It
might be a little more complicated, but with a good framework, not _that_
much more. I.e. multithreaded code is a pain.. asynchronous or not, it's a
pain. And, in the rare case where you really need to wait for the server to
answer back, well, use a loader. But there's a difference between using a
loader when you absolutely need it, and everywhere. Think of how apps work on
the desktop.. everything is lightning fast, but on some rare occasions, it
blocks. Which is better: something that always blocks, or something that
sometimes blocks?

~~~
gurraman
Does it also prevent the browser from crashing if there's still something
pending? :)

~~~
phzbOx
No, but that's not the point. If the web page is telling you "Wait, your email
is being sent" and you close your browser, do you expect the email to be sent?
Or if it says "Please wait, the document is being saved" and you close the
application..

I.e. the point is that asynchronous makes it feel smoother. If the browser
crashes while data is being transmitted, there's nothing you can do - Ajax,
asynchronous or whatever. So what will happen is that the data will be lost.
The asynchronous part doesn't solve all problems.. it just feels faster for
the user.

And, by the way, why the ":)" at the end? Is it because you were happy?
Personally, I find that a bit provocative. (I.e. in gaming, people would say
"You suck :)" or, if they'd crushed you, they would just say ":)". It's bad
manners.) But then, if you _were_ happy and just wanted to show it, sorry for
this comment.

------
callmeed
I'm curious how SEO will play into this trend of async UIs and JS frameworks.

In 2 of the example studies given (Amazon and Yahoo!), we're talking about
content/commerce sites where rankings matter.

If you reduce load time by Xms and increase conversions by Y%, your net gain
could still be negative _if you get bumped to page 3 for important searches
and lose traffic_.

Do any of these JS frameworks consider SEO and have appropriate features
built-in? (I'm thinking of things like hash fragments)

Can someone who runs a content/commerce site that cares about SEO comment on
this?

~~~
thaumaturgy
I'm about to launch a new version of my business's website today which
incorporates the asynchronous UI concept while keeping SEO in mind.

I settled on building a JS-free version of the website using the templating
system I've developed for the backend, and then loading in JS at the end of
page load which replaces and rebuilds the site into an interactive UI for
users with JS enabled.

Assuming Google doesn't try too hard to execute JS on the page, it should get
a clean, "normal" version of the site, with all text & menus and everything
else accessible, while users get something a little bit different (but with
the same content).

~~~
marquis
We do the same thing. Our CMS delivers views based on the type of agent and
whether JS is enabled or not. It allows our users who aren't able to handle
all the fancy UI stuff (we have some blind users) to use the site without
losing any functionality, much like Gmail static. With the proper design it's
really simple to extend and add content this way.

------
dpup
Gmail optimistically updates the UI in many cases, for example when starring a
message or marking read/unread. Not doing so for send was a very conscious
decision due to the severity of the failure cases.

That said, work has been done to ameliorate the problems and reduce the chance
of data loss. Check out the "background send" lab released earlier this year :
[http://gmailblog.blogspot.com/2011/04/new-in-labs-
background...](http://gmailblog.blogspot.com/2011/04/new-in-labs-background-
send.html)

------
fiatpandas
For me, when something loads too fast, I think something broke, because my
brain has been wired to learn that actions through a web browser are generally
not instantaneous and take a bit of time. Even if it's just a fraction of a
second.

I really like this idea, but for some reason I think my brain would be more
comfortable with an ajax spinner appearing for 300ms rather than an instant
page load. For instance, I built something recently which loaded images on a
page via ajax calls. It happened very quickly, 50ms maybe. The loading seemed
way too fast, so I actually delayed the images by about 300ms. It seemed a
much more comfortable delay, and a few of my non-developer friends agreed.

Is there a sweet spot, or am I crazy? Let's just ignore amazon and google's
data for the sake of argument.

~~~
dgeb
My solution (for an app in development) is a status indicator that shows users
when they're offline, sync'ing, or sync'd.

~~~
fiatpandas
I think something like this will be key. For me, as absurd as it sounds, the
delay is the indicator. That indicator needs to be replaced if actions are to
become instantaneous.

------
ggwicz
I liked this article a lot, so please don't think I'm being negative. The only
thing that I sort of disagreed with was _"we should optimize for the most
likely scenario"_

I disagree. 1) optimization is fragility 2) the extremes will inform the
average

The "most likely scenario" is a visitor with a fast-enough Internet connection
that a few hundred ms more won't matter.

So we should build for the extremes? Well, that's a little extreme (see what I
did there?). But if you point to stats like "5-9% hit the back button...",
that is _not_ the most likely scenario...it's, well, 5% to 9% of the
scenarios...

There's a documentary called Objectified that examined this with physical
products, check it out. I think when developing and/or designing for _speed_ ,
the "most likely" person is the least of your worries. The people still
rocking slow dial-up connections are the ones who will be impacted...design
and develop with them in mind.

One example from Objectified was a toothbrush. When they targeted extremes and
made a handle that musclebound roidheads, people with MS, and old people could
easily use (i.e. the extremes of human mobility), the "average" consumer was
more than taken care of _and_ the extremes were satisfied.

If you develop and design for the slow browsers and the wonky old Internet
connections, or at least keep them in mind, the normal folks will be more than
satisfied (ideally).

Sorry to be so picky it just caught my eye and I felt compelled to chime in
whilst waiting for school to end...

------
padenot
For what it's worth, GMail offers a lab feature that enables asynchronous
email sending (i.e. `Send` is clicked, and you go back immediately to your
last location while the email is sent in the background).

------
inopinatus
Whoop, I get to reuse a comment I made on an earlier article, almost verbatim:

"Now that the client is the MVC execution environment with the client-server
interaction used mostly for asynchronous data replication, plus some extra
invokable server-side behaviours, we can congratulate ourselves on having
more-or-less reinvented Lotus Notes."

------
exclipy
This is exactly the philosophy that Google Wave had, except they called it an
"optimistic" UI - it always assumes that every action will succeed on the
server side.

It solved all these problems mentioned and more - for example, it used the
operational transform algorithm to merge your changes with those of other
users on the same page and update the client state to reflect this
asynchronously. It also could continue working without a network connection -
it'd just keep queuing your requests, and when you plug in the network again,
it'd just start working again, albeit possibly with a big backlog of changes
to merge together.
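
The offline-queueing behaviour described here can be sketched in miniature (this is a toy illustration with invented names, not Wave's actual code, and it omits the operational-transform merging entirely): while offline, actions pile up in a backlog; when the network returns, the backlog is flushed in order.

```javascript
// Queue actions while offline; flush the backlog in order on reconnect.
function createActionQueue(sendToServer) {
  const backlog = [];
  let online = true;
  return {
    backlog,
    setOnline(isOnline) {
      online = isOnline;
      if (online) {
        while (backlog.length) sendToServer(backlog.shift()); // oldest first
      }
    },
    perform(action) {
      if (online) sendToServer(action);
      else backlog.push(action);   // applied locally now, synced later
    }
  };
}
```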

These are the kinds of problems you might have to start thinking about if you
want to go down this path. Remember that Google Wave died from its own
complexity.

------
hesselink
Nice article overall, but this stood out for me:

> Again, this is an exceptional event, so it's not worth investing too much
> developer time into.

I have to disagree here. Exceptional events are exceptionally important here,
since so much progress is hidden from the user. It is absolutely critical to
inform the user of what happened, so their expectations aren't broken, and to
cleanly recover so the application is not in an incorrect state. I think this
is the most important thing to invest developer time into in an application
built in this way. Otherwise, you'll lose customer confidence due to
unexpected behavior or even lost/corrupted data.

------
towhans
I totally disagree with updating the UI BEFORE the request gets back. It's
wrong for so many reasons. They all boil down to the fact that server state is
independent from the client state.

The speed argument also doesn't hold. If requests take too long to process
then you have either problem with your API (doing something synchronously on
server side which should be done asynchronously, granularity problems,...) or
your server is freaking slow. At worst a request should take under 100ms of
pure server time. Add latency and you have 300ms.

~~~
giulivo
+1

a sync problem on the server can't be worked around on the client side. You
would end up introducing complexity in an unstable, unaffordable and insecure
client.

also, actions like filling a page with data from a db do require the client to
wait for the server to complete.

~~~
phzbOx
It's always the eternal "good for users" vs "good for programmers". I.e. when
creating a new language, one must make a choice: should it be easier for the
person using it, or easier for the person implementing it?

And, if we look carefully at the past, it seems that it always starts with
"easy for the coder" -------> "easy for the user". For example, when the
first examples of _Ajax_ came out, it was really hacky, and most programmers
would never have believed what they'd see today.

So, I think that you are half right about "introducing complexity in an
unstable, unaffordable and insecure client." Maybe with current technology
and frameworks, you are right. But I'm certain that in the coming
months/years, we'll go down the road of a better UI.

And, I still believe that it's not as hard as people think to make the UI
update first and sync later. 99.9% of the time, the server returns "ok" or
something we already knew. In the remaining 0.1%, we have to choose if we
really want to make it to 100%.. but in these rare cases, a hard refresh is
perfectly fine.

------
jablan
Stating that this is the "future of web UI" implies that most of us will have
to develop duplicate logic on client and server side, possibly with different
languages (as the author actually does). While he mostly talks about the
validation, it seems to me that just plain validation will not suffice - we
would have to keep lots of business logic duplicated as well. And "duplicate"
usually means "almost the same, but with a bunch of edge cases not behaving
exactly the same way".

Am I the only one who does not like such an outlook?

~~~
jaylevitt
I think the idea is that you'll develop in a framework where the client- and
server-side validation can call the same code.
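One way that can look, sketched as a plain JavaScript function with no DOM or framework dependencies so it can run in both the browser and Node (the validation rules here are made up for illustration):

```javascript
// A framework-free validation function: the same file can be required
// on the server and shipped to the browser, so both sides apply
// identical rules to a message before/after it is sent.
function validateMessage(msg) {
  const errors = [];
  if (!msg.to || !/\S+@\S+/.test(msg.to)) errors.push('invalid recipient');
  if (!msg.body || msg.body.trim() === '') errors.push('empty body');
  return errors;
}
```

On the server this could be exported with `module.exports`, while the client bundles the same source, which is roughly what frameworks promising shared validation do under the hood.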

~~~
nickand
I'm sorry I forgot what was broken..

------
jqueryin
From reading through the various responses on this post, I believe one very
feasible and worthwhile solution for asynchronous UIs is to maintain what has
been referred to as a transaction log somewhere in the UI, where the user can
see all requests and their subsequent status/response message when the proper
event fires. This assumes that any actionable item triggers an immediate
change to your UI in favor of the "success" case. It would be up to you
whether to revert that change in the scenario where a failure occurs in an
event response.

This would remove the dreaded "blocked UI" scenario because everything
appears to happen instantaneously, while failsafes remain in place for when
something goes wrong (the infrequent case).

To me it seems more a matter of order reversal in how we handle AJAX calls
(assuming you aren't using an async/evented system).

I can, however, think of downsides. Take, for instance, a nested tree of
actionable items where each has prerequisites on another's completion. You
could chain the events, but you might end up with a queue unbeknownst to the
end user. Worse, a failure might occur at the parent level, which leads to
failures for all subsequent calls. I'm not sure what a good alternative to
this might be in terms of non-blocking UIs.
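A minimal version of such a transaction log might look like the following sketch. The names are illustrative; a real one would also render the entries in the UI and wire `finish` to the Ajax response events:

```javascript
// A transaction log: every request is recorded as "pending" and updated
// to "ok" or "failed" when its response event fires, so the UI stays
// unblocked while failures still surface somewhere visible.
class TransactionLog {
  constructor() { this.entries = []; }
  start(description) {
    const entry = { description, status: 'pending' };
    this.entries.push(entry);
    return entry;
  }
  finish(entry, ok, message) {
    entry.status = ok ? 'ok' : 'failed';
    entry.message = message;
  }
  pending() {
    return this.entries.filter(e => e.status === 'pending');
  }
}
```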

------
lkozma
This asynchronous sending of emails sounds nice but it reminds me of the times
when I started using email some 15 years ago. I would sit at a Unix terminal,
fire up Pine, write all my emails and hit send with no delay or blocking, go
to sleep and hope that during the night some script actually succeeds in
sending those emails.

------
jtmille3
I really appreciated this post. Everything Alex mentioned in his article I
learned through trial by fire doing mobile development. Performance was
critical and UI responsiveness was a must. It was then that it dawned on me
that all the same techniques could be applied to a web application just like
Alex mentioned. Most web developers seem to get stuck in the framework rut.
All the tools and techniques are there to build something fast and responsive.

If there is one thing I can truly appreciate about what he is trying to do
with spine it's the client id generation and request queueing. This has got to
be the core of what makes good "AUI". Every developer dealing with remote
requests should have this in their back pocket. 101 stuff.
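Both ideas are small enough to sketch. Assuming a Promise-based transport, a serialized queue just chains each request onto the previous one, and a client-generated id lets the UI refer to a record before the server has assigned one. This is a sketch of the general technique, not Spine's actual implementation:

```javascript
// Generate a temporary client-side id so new records can be rendered
// and referenced before the server round-trip completes.
function makeClientId() {
  // simple illustrative id; real libraries use something stronger
  return 'c' + Math.random().toString(36).slice(2, 10);
}

// Serialize requests: each one starts only after the previous finishes,
// so updates reach the server in the order the user acted.
class RequestQueue {
  constructor() { this.tail = Promise.resolve(); }
  enqueue(task) {
    const next = this.tail.then(() => task());
    this.tail = next.catch(() => {}); // keep the chain alive on failure
    return next;
  }
}
```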

------
james33
Anyone have an opinion on whether Spine is the best for this as opposed to
something like Backbone?

~~~
bryanh
Good discussion here: <http://forrst.com/posts/Backbone_js_Spine_or_other-LZy>

~~~
seiji
Obnoxious: "Comments are only visible to Forrst members. Log in or Request an
invite."

~~~
mceachen
Here's a screenshot: [https://skitch.com/mceachen/gjr84/http-forrst.com-posts-
back...](https://skitch.com/mceachen/gjr84/http-forrst.com-posts-backbone-js-
spine-or-other-lzy)

------
alex_c
Is this the kind of thinking that led to Gawker's monstrosity of a redesign
half a year ago?

------
tlack
I have some APIs I have to call that take up to 5 seconds to return and resist
caching (hotel availability, for instance). Would those delays become even
more jarring with an approach like this?

~~~
phzbOx
You picked a good example of where this wouldn't work. However, you can still
make everything else asynchronously fast, and show a loader when it really
needs to wait for 5 seconds.

For instance, take Gmail. Some parts might be hard to use that way: chat, for
instance, where you need to receive the other person's answer before you can
show it. However, adding labels, deleting a message, etc. can all be done
asynchronously.

------
n8agrin
Totally agree with the premise that user actions should provide responses
instantly; I was building those kinds of responsive UIs two years ago. But I
have a problem believing that the future of web applications is based on
serializing all Ajax requests and duplicating model validation on the client.
Come on, this is 2011; this technique isn't new. Let's work on things that
will really change the state of the art.

------
donpark
'Async UI' has its uses, but in the case of email I don't think the benefits
to users have much substance. Yes, perception is a critical design factor we
must all deal with on a daily basis, but we shouldn't forget that a 'magic
show' entertains at best and offers no real value to users. 'Magic' by
another name is hoodwinking, and it can easily induce confusion and anger
when misapplied.

------
borismus
Great post; however, many things don't fit the pattern. For example, lacking
precognition, your search app can't know what people will search for. There
are many similar UI examples where you can't do anything until you get input
from the user, leading to a fundamentally synchronous (from the user's
perspective) transaction.

------
flibble
I couldn't agree more. For connected web based games this is a requirement.
<https://www.switchpoker.com/client> makes use of asynch calls to give the
appearance of an extremely responsive UI.

------
dearmash
Nice to see people testing out <script> tags in the demo. Also glad to see
the tags escaped instead of being navigated away from the page.

Surprised a little to see the demo is actually being edited by multiple
people, presumably from YC.

------
radicalbyte
I did a bit of work towards this last year: the user experience is really
nice, comparable to Silverlight or Flex. It's just that both Silverlight and
Flex have a much nicer development experience, at the cost of a plugin.

------
gurraman
I prefer to put the worker queue on the server. It's not as snappy, but it's
snappy enough. And queued up operations will not get lost if the browser is
closed/crashes.

------
gizzlon
I wonder if people have thought through the security problems and implications
of moving state to the client side..?

Actually, that's a lie, I'm sure they have not ;)

------
smackfu
Does anyone know why the Show links are actually faster for me on the async
version than the static? Shouldn't a "show" be instant either way?

------
wahnfrieden
This demo needs to listen to hash-change events so that it goes back when I
hit back in my browser. It's otherwise a good example.

------
outside1234
What are people using on the backend for apps like this? I was just starting
an app with an approach like this (both for these reasons and to harmonize
across web and mobile clients), and I was planning on using RoR given its
first-class support for JSON and its maturity. Thoughts?

~~~
dgeb
Rails and Node are both good options for the backend. I found Alex's
screencast on integrating Spine and Rails with the spine-rails gem to be a
good introduction:

<http://vimeo.com/30976192>

------
james33
Am I the only one that finds it odd that Spine doesn't do this with their own
site?

~~~
phzbOx
Well, the website is mostly static; that'd be overkill. Maybe the author
wanted to use another library to generate the documentation automatically.
But you've got a point: if it's not your first choice for simple static
pages, it's a bit scary to use it for a huge production website. I.e., Django
might be overkill for a simple static page, but it's still trivial to use it
for that.

------
zachallia
It's amazing how such a small slice of time can have such a huge impact.
Definitely excited by this and other approaches to increasing perceived
speed!

------
nickand
If I am loading a list I want to know when it is done.

------
HnNoPassMailer
TL;DR:

    The idea is that you update the client before you send an Ajax request to the server.

"Optimistic updating", not "Asynchronous UI". The UI is already asynchronous
(regardless of UI update order).

    "request/response model"

"Pessimistic updating" -> i.e. update the UI only _after_ a successful
response

~~~
secoif
Agreed. Optimistic/pessimistic UI communicates the concept far more clearly.

------
maximusprime
Ajax is so 2005. At least use Comet, WebSockets or SPDY where available.

