But for sending an e-mail? Not in a million years. I want to see the spinner, and then know that it was actually sent, so I can close my web browser.
E-mails can sometimes be terribly important things.
If my e-mail web app always instantaneously tells me "sent!", then I never have any idea if it actually was -- how long do I have to wait before it tells me, "sorry, not sent after all"? What if the app doesn't get back an error code, but the connection times out? What if the app doesn't implement a timeout?
Basically, if I don't get a real, delayed, "sent" confirmation, then I know there was a problem and can investigate or try again. But if I get an instantaneous "sent" confirmation, and then don't get a "sorry, there was a sending error" message, I can't be 100% confident that the data actually got to the server, because maybe there was a problem with triggering the error message. And since I'm a web developer, I can imagine all SORTS of scenarios that a programmer might not account for that would prevent an error message from being displayed.
As soon as you send a message, it goes into a little list on the side of your screen of things that are transferring to Google's servers. You can see it there, and you will see it go away when it has been transferred, so you know what's going on. But in the meantime, you can go back to your inbox, look at other emails, or do whatever else you want. That's how an asynchronous interface should be done.
One thing I didn't notice the article mentioning is that it's possible to have blocking only for certain parts of an interface. So if you press a "load picture" button, then maybe a gray square with a spinner will appear, but the rest of the interface should continue working as usual.
The default "Sending..." notation blocks you on the same page, doesn't it?
It doesn't block the UI and yet it still gives the user an indication when the actions are in progress/completed.
So instead of blocking and showing the "Sending..." message, it redirects you to the main inbox and shows a "Sending in background..." message, until the message has been sent. Of course, Gmail is so fast for me that usually I'm barely back at the inbox before the message is finished! :)
While your point about the server end being a queue is true, there's an expectation that once your message is offloaded onto Google's queue, they will reliably process it in a reasonable amount of time.
So the idea is that instead of time-consuming, blocking operations, you have fast blocking operations that put things in queues, and the queues then handle the slow operations.
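To make that shape concrete, here's a minimal sketch in JavaScript; the /send endpoint and all names (outbox, deliver, processOutbox) are assumptions for illustration only:

    // Fast, blocking enqueue; a background worker drains the queue.
    const outbox = [];

    // Hypothetical slow operation: POST the message to a /send endpoint.
    const deliver = (message) =>
      fetch('/send', { method: 'POST', body: JSON.stringify(message) })
        .then((res) => { if (!res.ok) throw new Error('send failed'); });

    function send(message) {
      outbox.push(message); // effectively instant from the UI's perspective
      processOutbox();      // kick the worker; returns immediately
    }

    let working = false;
    async function processOutbox() {
      if (working) return;  // one worker at a time
      working = true;
      while (outbox.length > 0) {
        try {
          await deliver(outbox[0]); // the slow part happens here
          outbox.shift();           // dequeue only after success
        } catch (err) {
          break;                    // leave it queued for a later retry
        }
      }
      working = false;
    }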
It ended up turning what could be a shared-nothing transaction (like every other ISP) into a network-wide two-phase commit requiring (at the time) millions of dollars of fault-tolerant hardware.
Meanwhile, guess what? You have no idea if that mail, queued at your ISP because the destination is down, is sitting on a volume with a write-back cache on a RAID controller with a dead battery. You can never be 100% confident unless you have delivery status notifications, and those are pretty much dead these days.
At first, all mail was local, and so naturally was stored in a database, like most non-Internet mail servers of the day. It was what Dave Crocker called "rock mail" - I stick a message under a rock, and you know to look under that rock for your message.
The bigger we got, the harder that was, in the days when horizontal scaling by moving I/O to another machine was just as expensive (because the network was even slower than the disk). But we were sure that our distinction was important, and Internet-style queued mail was widely considered flaky (due in no small part to our own poor Internet delivery, no doubt). So we kept it, to the point of storing user mailboxes on Tandem NonStop machines that did multi-site replication with SQL implemented at the drive controller level.
Many of our scaling challenges were due to decisions that made sense early on and that we consciously refused to ditch; I wrote a bunch up here:
In Google Labs, there's a feature called "background send." I love it. It shows "sending in the background," allowing you to go do other things. If you try to close the tab/browser, it warns you the same way it warns you when sending normally.
But you already have no idea if it actually arrives, let alone at the right mailbox, let alone was read/processed - until you get an asynchronous response.
And even if a web app blocks on 'sending mail', you can still suffer timeouts and disconnects, meaning the case remains of occasionally having to refresh and manually verify an action truly went through.
So if you're not concerned about those things, why be concerned about blocking on 'sent'? And if you are concerned about those things, how does non-blocking materially increase your concern? At least an asynchronous UI could automatically follow up on a client-side send command that was never acknowledged.
Don't get me wrong; I can see an argument for operations that you truly do want to block all activity on until you receive a pass/fail. Email just doesn't strike me as a particularly good example of that.
- I could quit my browser
- I could close my laptop
- If my train goes into a tunnel, I could lose my internet connection
…and I might never find out that the email didn’t make it, because my mail provider might never have found out that I was trying to send one. I need to know when the message is safely out of my hands.
I understand that for some messages, sure, you want an acknowledgement. I just don't see how that process is notably different for an async web client compared to what's already out there on desks and phones, and particularly as compared to a blocking web UI.
If you had a blocking UI, any of those 'interruption' events could occur while you're staring at a spinner. And you (rightly) wouldn't know or feel confident that the message was sent until you re-established your connection and verified the item had made it to your sent items.
Which is the same as it would be with an async client: it's an important email, so until you saw it in the Sent Items folder, you wouldn't have the warm-and-fuzzy feeling.
If your email never got sent to the server in the first place, why couldn't the website use local storage to detect the failure? Gmail constantly POSTs drafts to solve that issue (not sure if it uses local storage).
I don't see why this UI couldn't show you a notification if Send was pressed and communication to the server wasn't successful. Especially since it is doing so much client side work already.
Bottom line: you can have a non-blocking UI while still communicating to the user when there are problems.
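As a sketch of that bottom line (the /mail/send endpoint and the showSent/showSendFailed helpers are assumptions, not Gmail's actual mechanism):

    // Stand-ins for whatever UI treatment the app uses.
    const showSent = (draft) => console.log('sent', draft.id);
    const showSendFailed = (draft) => console.log('send failed', draft.id);

    function sendMail(draft) {
      showSent(draft); // optimistic: the UI moves on immediately
      fetch('/mail/send', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(draft),
      })
        .then((res) => { if (!res.ok) throw new Error('server rejected send'); })
        .catch(() => {
          // Keep a local copy so the draft survives even a page reload.
          localStorage.setItem('unsent:' + draft.id, JSON.stringify(draft));
          showSendFailed(draft); // the user can retry or edit
        });
    }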
Your sentence makes rafts of assumptions.
Later, if there is an issue with sending your e-mail, it alerts you inside your browser. If you're unreachable through your web browser, it sends you a text. What's wrong with that?
Unfortunately, pretending the network isn't there doesn't make it so. The flakiness has to come out somewhere, sometime. Either you make the user wait now, or you explain later, after you've lied about what you did. It's a tricky tradeoff.
Let's fast-forward to the end of the movie: You'll end up with a zillion special cases that are impossible to test properly. You'll decide to restore sanity by replicating the data into a client-side store with low latency and high reliability, so you can go back to a synchronous UI that your developers can reason about. All the craziness will be in a background process that syncs the client and server stores, which will still have to cause weird behavior as reality demands it, but at least the logic is contained. (I just described an IMAP mail client, or--for a Normandy-invasion-scale example--some versions of Microsoft Outlook.)
Then a new thin client platform comes along where you can't do all that complicated client-side stuff. The cycle repeats.
There are significant costs for real world apps to what the OP is suggesting, and you can't abstract them away in a framework or library, as much as you may wish it to be the case.
See also: http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Comput...
Call it second system syndrome, call it ivory tower complexity or architecture astronauts.
Some of us used to think that overly complex solutions only exist in enterprise environments, yet they happen everywhere developers exist.
The reason why Backbone provides a persistent client id for the duration of every application session is so that if you need to reference model ids in your generated HTML, you always have something to hang your hat on. If I have '<article data-cid="c530">' ... I can always look up that article, regardless of whether the Ajax request to create it on the server has finished or not. With Spine's approach: '<article data-id="D6FD9261-A603-43F7-A1B2-5879E8C7926B">' ... I'm not sure if that id is a real one, or if it's temporary, and can't be used to communicate with the server.
Optimistically (asynchronously, in Alex's terms) doing client-side model logic is tricky enough in the first place, without having to worry about creating an association based off a model's temporary id. I think that having a clear line between a client-only ID and the model's canonical ID is a nice distinction to have.
From the server's perspective, it stores the real ID and the CID given by the client. In the extremely rare case where a CID could be duplicated, the server could send back an error with a unique CID and the client would update itself.
That way, we're sure the ID is always unique on the server, but the client simply uses a cid. In fact, the 'cid' could even be abstracted away by calling it ID. I.e. the server has 2 ids, the server one and the client one... the client doesn't need to know the server-side one.
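A rough sketch of that two-ID scheme (illustrative only; this is neither Backbone's nor Spine's actual API):

    let nextCid = 0;
    const byCid = new Map(); // client id -> model, usable before the save returns

    function createModel(attrs) {
      const model = { ...attrs, cid: 'c' + nextCid++, id: null };
      byCid.set(model.cid, model);

      // The server stores both; its reply carries the canonical id.
      fetch('/models', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ cid: model.cid, ...attrs }),
      })
        .then((res) => res.json())
        .then((data) => { model.id = data.id; }); // canonical id arrives later

      return model; // generated HTML can reference data-cid right away
    }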
But in many apps, your server-side IDs are auto-incrementing MySQL or Postgres ids, or even a Flickr-style ticketing ID server. You really don't want your DB to be worrying about what are essentially transient JS/HTML references.
That's exactly the catch.
If you still need to consider CID collisions and write code to handle a potentially-different canonical server ID, there's no conceptual or complexity savings. You're not doing 'only' a CID; you're doing all the same work.
And, personally, it still 'smells' to me. I know that's not an objective argument, but there it is. It feels like a particularly leaky abstraction that will end up causing more trouble than it spares.
edit: clarified sarcasm that looked ambiguous on second glance.
What happens with Spine, though, is that those IDs are used in a URL.
So in the example I get the URL for the newly created page: http://spine-rails3.herokuapp.com/#/pages/34B3CC88-32CC-4D08...
Which of you can actually access it?
That isn't right. The GUIDs should be internal only.
This leads us to the complexity of the "Async" UI in general.
In order to redirect (navigate) the user to a proper URL, you DO need to wait for the response from the server.
But a real async UI should just allow the user to do something else. It doesn't mean it should not WAIT for the server response.
That's assuming I'm reading the article correctly.
The whole toolchain for reasoning about stuff happening in the browser is still lagging a few years behind what we have on the server, which is a related but larger problem. We have Firebug, which gets us truly revolutionary features like "output log messages... in a browser!" and "inspect the internal state of objects in memory... in a browser!" But many of the rest of the cutting edge developments from the 60s and 70s haven't quite made it to the browser yet, or they're not yet at the point where they can be used by mortals. (Selenium: I want to love you, and yet I can't actually use you for anything because you break my brain.)
Rails is a glue of a bunch of things. It's not like that doesn't exist in other environments.
Meanwhile, the state of JS testing is still far behind. Like really really far behind.
Once you get that sorted out, you're still left with the common set of libraries. Still a long way to go, really.
Not to mention that the whole client-side thing is ripe for a change (depending on whom you talk to).
A few CI tools don't work well with the "requires browser to run test" paradigm.
I understand that means there might be a class of errors that the no-browser tests would not be able to catch, but it's still a huge win.
Let's say your AJAX requests have a .1% chance of failure. If your users perform a thousand actions each on average, then roughly 63% of your users (1 - 0.999^1000 ≈ 0.63) will have been exposed to your error flow. Hope it's better than "Sorry, an error occurred."
Individual errors are rare compared to successes. Overall errors happen all the time.
Amazon: 100 ms of extra load time caused a 1% drop in sales (source: Greg Linden, Amazon).
Google: 500 ms of extra load time caused 20% fewer searches (source: Marissa Mayer, Google).
Yahoo!: 400 ms of extra load time caused a 5-9% increase in the number of people who clicked "back" before the page even loaded (source: Nicole Sullivan, Yahoo!).
edit: Original source for Amazon stat (possibly also Yahoo, or possibly it's just referenced) is a powerpoint by Greg, downloadable at http://7303294208304035815-a-1802744773732722657-s-sites.goo...
edit 2: The Google stat is from a speech at a 2006 "Web 2.0" conference, referenced by Greg at http://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20....
edit 3: Yahoo stat from Nicole's presentation here: http://www.slideshare.net/stubbornella/designing-fast-websit...
For example, if an item is updated and the backend balks, but only 10 minutes later, there is no clear and concise way to indicate this error to the user unless the item was marked as "in progress". If the backend is normally snappy, then it might make sense to delay showing the in-progress indicator (so that the majority of users won't ever see it), but discarding it altogether is not a way to go.
Another example, say there is a list of items keyed by a name. I delete A, then rename B into A, and then the deletion of A fails. Ain't that a pretty mess to shovel yourselves out of?
That's not to say that there aren't certain UIs that could be made to work in "instant" fashion, but realistically there just aren't very many of them.
There's no need to block the UI. It is perfectly sufficient to disable just the affected item.
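For instance, a sketch of per-item disabling (the CSS classes and the /items endpoint are made up):

    // Disable just the row being updated; the rest of the page stays live.
    function updateItem(rowEl, payload) {
      rowEl.classList.add('in-progress');
      rowEl.querySelectorAll('button, input').forEach((el) => { el.disabled = true; });

      fetch('/items/' + encodeURIComponent(payload.id), {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
      })
        .then((res) => { if (!res.ok) throw new Error('update failed'); })
        .catch(() => rowEl.classList.add('failed')) // mark only this item
        .finally(() => {
          rowEl.classList.remove('in-progress');
          rowEl.querySelectorAll('button, input').forEach((el) => { el.disabled = false; });
        });
    }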
> ... creating responsive applications should not be at the expense of the user's understanding of what is happening.
In this case you are bound to repeat Microsoft's Distributed COM fiasco. They tried to blur the line between accessing in-process, in-machine and over-the-network services behind an abstract API. It was nice in theory, but in practice it was a disaster. It is really hard to write a meaningful app - even an asynchronous one - when an API call can take anywhere between a few ms and several seconds to complete.
In case the parallel is not clear - their idea was the same as yours: "devs need not know what's happening". This does not work. Devs need to know, as do users in your case. Perception is indeed an important part of the UX, no arguing there, but the UI needs to be designed in a way that precludes users from making false assumptions that would prove frustrating and disastrous should the backend go kaput. Faking snappiness does the opposite: it fosters make-believe.
I could not have thought of a worse example than removing a progress indicator from sending an email. Making an "async UI" work in a fluid way that provides confidence to the end user is much harder than simply changing the state immediately and hoping that 99% of the time, it works.
Error handling can be a pleasant experience if done correctly, and in this blog post it's just an afterthought.
Here's a better way to do it:
- I click "Send Mail"
- My UI changes as if it were sent, allowing me to do other things in the meantime.
- I receive a growl notification in some other part of the UI that tells me the email has been successfully sent.
- If 1 second has gone by and I did not receive a response from the server to confirm that the mail has been sent, I will see an indicator that tells me that the sending is in progress, where the growl indicator would have been.
- If it is an error, the indicator changes and lets me click it to go back to the mail composition view. (A rough sketch of this flow follows.)
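A minimal sketch of that flow, assuming a hypothetical /mail/send endpoint (setStatus stands in for the growl/indicator UI):

    const setStatus = (text) => console.log(text); // growl/indicator stand-in

    function sendMailOptimistically(draft) {
      // The UI already acts as if the mail were sent; only surface an
      // indicator if the server takes longer than a second to confirm.
      const slow = setTimeout(() => setStatus('Sending...'), 1000);

      fetch('/mail/send', { method: 'POST', body: JSON.stringify(draft) })
        .then((res) => {
          if (!res.ok) throw new Error('send failed');
          setStatus('Your mail was sent.');
        })
        .catch(() => setStatus('Send failed - click to return to your draft.'))
        .finally(() => clearTimeout(slow));
    }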
The concept of providing perceived performance is not new, but the details are in the execution, and you will shoot yourself in the foot if you don't cover all the little details that are required to make something like this work.
Otherwise, some company is going to implement some jarring async UI incorrectly and piss off a lot of users.
Yes, blocking a UI is bad, but notifying the user of progress and task completion is a very good thing.
When average users complete an action and see the results instantly, they're not wondering if something went wrong, or if something is ongoing. They've already had UI feedback suggesting a successful result.
It goes without saying that mechanisms in the back-end need to be implemented in order for AUI to provide a great user experience.
- On error, an action should be retried.
- Long-lived processes should be queued and, upon failure, requeued. (A retry sketch follows.)
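A minimal retry sketch along those lines (the attempt count and backoff figures are arbitrary):

    // Retry an action with exponential backoff; requeue until attempts run out.
    async function withRetry(action, attempts = 5, baseDelayMs = 500) {
      for (let i = 0; i < attempts; i++) {
        try {
          return await action();
        } catch (err) {
          if (i === attempts - 1) throw err;  // out of retries: surface it
          const delay = baseDelayMs * 2 ** i; // 500, 1000, 2000, ...
          await new Promise((resolve) => setTimeout(resolve, delay));
        }
      }
    }

    // Usage: withRetry(() => fetch('/mail/send', { method: 'POST' }))
    //          .catch((err) => console.log('still failing:', err));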
It's not perfect, but I find that it is better. There's definitely work to be done on the error handling front. The UI should be able to clearly notify the user when something he previously did, didn't work.
Merely notifying the user, long after, that an error occurred and his work is not saved is hardly sufficient. Even assuming that the user doesn't leave your app, they could easily be off doing something else miles away from that operation.
And what about the second point: how useful is it to alert the user, long after they've stopped caring, that something they thought was done actually wasn't?
And on the second point, presumably you'd always show some kind of indication that the message is sending, just not one that blocks the rest of the UI.
This is how desktop mail clients I've used work. The sending indicator is small, but if I try to exit before it's finished sending, it blocks the exit and alerts me about it.
The real question is how this would apply to applications slightly more complicated. An app where operations have consequences beyond just the single item you are working on; where users are clicking "save" and not "send"; and where users move between entirely different entities. It seems like this adds a lot of additional complexity for very little perceived performance gain.
In terms of the user experience, if the perf gains are small enough, then the chance that the user tries to exit while a request is in-progress should be slim, so interrupting their exit is fine as an edge case, and the gains in responsiveness should be weighed against your core user or business metrics - several hundred ms extra delay per action can have a significantly negative impact.
If the gains are large enough that the user is likely to interrupt something when exiting, then blocking the entire UI for each request is a terrible experience and you need to do something about it anyway.
I'm curious about what could be a better solution while maintaining that perceived speed.
And in the case of a big error, you can either refresh the page... or clear the models and resend them in JSON format to stay in sync with the server. It might be a little more complicated, but with a good framework, not that much more. I.e. multithreaded code is a pain... asynchronous or not, it's a pain. And, in the rare case where you really need to wait for the server to answer back, well, use a loader. But there's a difference between using a loader when you absolutely need it, and everywhere. Think of how apps work on the desktop... everything is lightning fast, but on some rare occasions, it blocks. What is better? Something that always blocks or something that sometimes blocks?
I.e. the point is that asynchronous makes it feel smoother. If the browser crashes while data is being transmitted, there's nothing you can do. Ajax, asynchronous or whatever. So, what will happen is the data will be lost. The asynchronous part doesn't resolve all problems... it just feels faster for the user.
And, by the way, why the ":)" at the end? Is it because you were happy? Personally, I find that a bit provocative. (I.e. in gaming, people would say "You suck :)" or, if they'd crush you, they would just say ":)". It's bad manners.) But then, if you were happy and just wanted to show it, sorry for this comment.
But I find that you often wind up having to write code to deal with that anyway, to handle cases of inadvertent user errors.
(e.g. I reserved a flight, room and rental car in Kansas City, KS -- but I was supposed to be reserving a flight, room and rental car in Kansas City, MO.)
So while I'm intimately familiar with and sympathetic to the challenge and complexity involved; I don't know that it's additional challenge or complexity.
In 2 of the example studies given (Amazon and Yahoo!), we're talking about content/commerce sites where rankings matter.
If you reduce load time by Xms and increase conversions by Y%, your net gain could still be negative if you get bumped to page 3 for important searches and lose traffic.
Do any of these JS frameworks consider SEO and have appropriate features built-in? (I'm thinking of things like hash fragments)
Can someone who runs a content/commerce site that cares about SEO comment on this?
I settled on building a JS-free version of the website using the templating system I've developed for the backend, and then loading in JS at the end of page load which replaces and rebuilds the site into an interactive UI for users with JS enabled.
Assuming Google doesn't try too hard to execute JS on the page, it should get a clean, "normal" version of the site, with all text & menus and everything else accessible, while users get something a little bit different (but with the same content).
That said, work has been done to ameliorate the problems and reduce the chance of data loss. Check out the "background send" lab released earlier this year : http://gmailblog.blogspot.com/2011/04/new-in-labs-background...
I really like this idea, but for some reason I think my brain would be more comfortable with an Ajax spinner appearing for 300ms rather than an instant page load. For instance, I built something recently which loaded images on a page via Ajax calls. It happened very quickly, 50ms maybe. The loading seemed way too fast, so I actually delayed the images by about 300ms. It seemed a much more comfortable delay, and a few of my non-developer friends agreed.
Is there a sweet spot, or am I crazy? Let's just ignore amazon and google's data for the sake of argument.
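If you wanted to experiment with that, a small sketch of a minimum-display-time wrapper (the 300ms figure is just the one discussed above):

    // Resolve no sooner than minMs after the call started, so very fast
    // loads still show a brief, comfortable loading state.
    function loadWithMinimumDelay(url, minMs = 300) {
      const started = Date.now();
      return fetch(url)
        .then((res) => res.json())
        .then((data) => {
          const remaining = Math.max(0, minMs - (Date.now() - started));
          return new Promise((resolve) => setTimeout(() => resolve(data), remaining));
        });
    }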
1) optimization is fragility
2) the extremes will inform the average
The "most likely scenario" is a visitor with a fast-enough Internet connection that a few hundred ms more won't matter.
So we should build for the extremes? Well, that's a little extreme (see what I did there?). But if you point to stats like "5-9% hit the back button...", that is not the most likely scenario...it's, well, 5% to 9% of the scenarios...
There's a documentary called Objectified that examined this with physical products, check it out. I think when developing and/or designing for speed, the "most likely" person is the least of your worries. The people still rocking slow dial-up connections are the ones who will be impacted...design and develop with them in mind.
One example from Objectified was a toothbrush. When they targeted extremes and made a handle that musclebound roidheads, people with MS, and old people could easily use (i.e. the extremes of human mobility), the "average" consumer was more than taken care of and the extremes were satisfied.
If you develop and design for the slow browsers and the wonky old Internet connections, or at least keep them in mind, the normal folks will be more than satisfied (ideally).
Sorry to be so picky it just caught my eye and I felt compelled to chime in whilst waiting for school to end...
"Now that the client is the MVC execution environment with the client-server interaction used mostly for asynchronous data replication, plus some extra invokable server-side behaviours, we can congratulate ourselves on having more-or-less reinvented Lotus Notes."
It solved all these problems mentioned and more - for example, it used the operational transform algorithm to merge your changes with those of other users on the same page and update the client state to reflect this asynchronously. It also could continue working without a network connection - it'd just keep queuing your requests, and when you plug in the network again, it'd just start working again, albeit possibly with a big backlog of changes to merge together.
These are the kinds of problems you might have to start thinking about if you want to go down this path. Remember that Google Wave died from its own complexity.
> Again, this is an exceptional event, so it's not worth investing too much developer time into.
I have to disagree here. Exceptional events are exceptionally important here, since so much progress is hidden from the user. It is absolutely critical to inform users of what happened, so their expectations aren't broken, and to cleanly recover so the application is not in an incorrect state. I think this is the most important thing to invest developer time into in an application built this way. Otherwise, you'll lose customer confidence due to unexpected behavior or even lost/corrupted data.
The speed argument also doesn't hold. If requests take too long to process, then you either have a problem with your API (doing something synchronously on the server side which should be done asynchronously, granularity problems, ...) or your server is freaking slow. At worst, a request should take under 100ms of pure server time. Add latency and you have 300ms.
A sync problem on the server can't be worked around on the client side. You would end up introducing complexity in an unstable, unaffordable and insecure client.
Also, actions like filling a page with data from a db do require the client to wait for the server to complete.
And, if we look carefully at the past, it seems that it always starts with "easy for the coder first" -> "easy for the user". For example, when the first examples of Ajax came out, it was really hacky, and most programmers would never have believed what they'd see today.
So, I think that you are half right about "introducing complexity in an unstable, unaffordable and insecure client." Maybe with current technology and frameworks, you are right. But I'm certain that in the following months/years, we'll head down the road toward better UIs.
And, I still believe that it's not as hard as people think to make the UI update first and sync later. 99.9% of the time, the server returns "ok" or something we already knew. In the remaining 0.1%, we have to choose if we really want to make it to 100%... but in these rare cases, a hard refresh is perfectly fine.
No regular Joe knows about 'server state' and 'client state'.
Am I the only one who does not like such an outlook?
This would remove the dreaded "blocked UI" scenario because everything appears to happen instantaneously, however there would be failsafes in place when something goes wrong (the infrequent scenario).
To me it seems more a matter of order reversal in how we handle AJAX calls (assuming you aren't using an async/evented system).
I can, however, think of downsides. Take, for instance, a scenario where you have a nested tree of actionable items, each with prerequisites on another's completion. You could chain the events, but you might end up with a queue unbeknownst to the end user. Worse, a failure might occur at the parent level, which leads to failures for all subsequent calls. I myself am not sure what a good alternative to this might be in terms of non-blocking UIs. (One possible shape is sketched below.)
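One possible shape, as a sketch: walk the tree of actions, and if a parent fails, mark its whole subtree skipped instead of firing it against bad state (markFailed/markSkipped are stand-ins for whatever UI treatment you'd choose):

    const markFailed = (node) => console.warn('failed:', node.name);
    const markSkipped = (node) => {
      console.warn('skipped:', node.name);
      node.children.forEach(markSkipped); // the whole subtree is off
    };

    // node: { name, action: () => Promise, children: [node, ...] }
    function runTree(node) {
      return node.action()
        .catch((err) => {
          markFailed(node);
          node.children.forEach(markSkipped); // don't run dependants
          throw err; // propagate so callers can react too
        })
        .then(() => Promise.all(node.children.map(runTree)));
    }

    // Usage: runTree(root).catch(() => { /* tree partially failed */ });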
If there is one thing I can truly appreciate about what he is trying to do with spine it's the client id generation and request queueing. This has got to be the core of what makes good "AUI". Every developer dealing with remote requests should have this in their back pocket. 101 stuff.
... but really, if you're building a large JS app, it doesn't matter particularly which library you pick -- the main benefit is simply to get your state out of the DOM and into rich, reusable models that make it easier to reason about and manipulate.
Here is the better answer: http://pastebin.com/rEZhXv1z
For instance, take Gmail. Some parts might be hard to use that way... for instance, chat, where you somewhat need to receive the other person's answer before showing it. However, adding labels, deleting a message, etc. ... that can all be done asynchronously.
Surprised a little to see the demo is actually being edited by multiple people, presumably from yc.
Actually, that's a lie, I'm sure they have not ;)
The idea is that you update the client before you send an Ajax request to the server.
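A minimal sketch of that order reversal, using a made-up starred-message example (render and notify stand in for app-specific UI code; the endpoint is an assumption):

    const render = (message) => console.log('render', message);
    const notify = (text) => console.log(text);

    function toggleStar(message) {
      const previous = message.starred;
      message.starred = !previous;
      render(message); // 1. update the client first

      // 2. then tell the server, rolling back if it disagrees
      fetch('/messages/' + message.id + '/star', {
        method: 'PUT',
        body: JSON.stringify({ starred: message.starred }),
      })
        .then((res) => { if (!res.ok) throw new Error('update failed'); })
        .catch(() => {
          message.starred = previous; // roll back the optimistic change
          render(message);
          notify('Could not update the message.');
        });
    }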