Basically, the rendered state of the client is the acknowledged server state plus the client-side command queue.
User actions don't make a server request and then update the UI. Instead they directly append to the local command queue, which updates the UI state, and right away the client begins communicating with the server to make the local change real.
While the client's command queue is nonempty, the UI shows a spinner or equivalent. If commands cannot be realized because of network failure, the UI remains functional but with a clear warning that changes are waiting to be synchronized.
(The connectivity-status API is useful for making sure that the command queue resumes syncing when the user's internet comes back.)
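A minimal sketch of that derived-state idea, assuming a plain command-object shape (all names here are hypothetical, not a real API):

```javascript
// Last state the server has acknowledged.
const serverState = { title: 'Draft', likes: 3 };

// Locally queued commands that haven't been confirmed yet.
const commandQueue = [
  { apply: (s) => ({ ...s, title: 'Final' }) },
  { apply: (s) => ({ ...s, likes: s.likes + 1 }) },
];

// What the UI renders: acknowledged server state plus pending local changes.
function viewState(server, queue) {
  return queue.reduce((state, cmd) => cmd.apply(state), server);
}

// While the queue is nonempty, the UI shows a spinner or "saving…" indicator.
const syncing = commandQueue.length > 0;

console.log(viewState(serverState, commandQueue)); // { title: 'Final', likes: 4 }
console.log(syncing); // true
```

The key property is that nothing mutates `serverState` until the server acknowledges a command; the UI only ever sees the replayed result.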
CQRS and ES are wonderful tools in such situations. Past that point, your web app is far away from the little CRUD mashup it was at the beginning.
Taking most of our inspiration from Git, it was simple. Define the atomic unit of conflict, and the definition of a conflict that cannot be automatically resolved, and simply help the end user understand and resolve the conflict.
95% of cases were fairly easy to merge without interaction, once we had defined "conflict." And the remaining 5% just required a little extra user experience design to help surface the appropriate resolution.
Yes, from this viewpoint, Git (and every similar version control system) is a hack. But it works well in practice because humans are running it, not machines.
Unless you're referring to something I'm utterly overlooking, I don't see how you'd get around the invalidated assumptions that came from being offline and having outdated information. Intent doesn't matter if the intentions were invalidated.
When assumptions change, you really have a management problem on your hands. There's more information (sometimes very complicated information, like "the back office changed his hours to at least 8 because of union contracts, so now he has 11") that must be communicated and decided upon, sometimes by more than one person (e.g. foreman and superintendent) to figure out the appropriate resolution.
In version control systems, this is the same, and unavoidable for asynchronous workflows. It's basically optimistic locking, really. You make changes hoping that nobody else has altered the data, then check to see if any of your assumptions (i.e. nobody else is making changes) have held. If they haven't held, then you need to recheck your assumptions and resolve the conflict; there's no way around that.
Suppose you change foo from 9 to 10. Was that because you now wanted foo to be 10 specifically, or because you wanted to increment foo and it happened to be 9 before so it becomes 10 now?
In isolation, these have the same effect. However, one is absolute and the other is relative, so if two of you happen to make that same change at the same time, your intentions matter very much to how your respective changes should be combined.
For example, if your code knows that both changes were intended to set new absolute values, it can automatically determine that the combined effect should also be 10. Similarly, if your code knows that both changes were intended to increment foo, it can automatically combine those effects to get 11. But if all it knows is that two people changed foo in concurrent updates that now need to be merged, that probably results in a conflict that requires manual resolution by a user.
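That merging logic can be sketched out concretely. This is a toy illustration of the idea above, with made-up command shapes (`set`/`inc` are hypothetical names):

```javascript
// Merge two concurrent changes to the same field, using recorded intent.
// Returns { ok: true, value } when the merge is automatic, { ok: false }
// when a human has to resolve it.
function merge(base, a, b) {
  if (a.op === 'set' && b.op === 'set') {
    // Both wanted an absolute value: identical targets merge cleanly.
    if (a.value === b.value) return { ok: true, value: a.value };
    return { ok: false, reason: 'conflicting absolute values' };
  }
  if (a.op === 'inc' && b.op === 'inc') {
    // Both wanted a relative change: the effects compose.
    return { ok: true, value: base + a.by + b.by };
  }
  // Mixed or unknown intent: hand it to a user.
  return { ok: false, reason: 'mixed intents need manual resolution' };
}

console.log(merge(9, { op: 'set', value: 10 }, { op: 'set', value: 10 }));
// → { ok: true, value: 10 }
console.log(merge(9, { op: 'inc', by: 1 }, { op: 'inc', by: 1 }));
// → { ok: true, value: 11 }
console.log(merge(9, { op: 'set', value: 10 }, { op: 'inc', by: 1 }).ok);
// → false
```

Note that if the commands only recorded "foo changed from 9 to 10," all three cases would collapse into the last one.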
This is very much just a distributed database where we've chosen availability in the presence of network partitions rather than consistency. The end result is that conflicts will inevitably happen, and aside from a hugely complex set of rules, the cheapest resolution is still human intervention.
Even if the UI shows 5 while another mechanism has changed the value to 9 in the background, your intent to set the value to 8 is unaffected.
If, however, you assume the "from" clause and do a +3 instead of =8, you get an invalid new state of 12.
Encoding intent implies declarative statements as opposed to imperative statements.
Intent could be for example to insert this new item as a first child of that other one, or to move these items to a position right before some other item, or to set font size on this item to 4.
The data structure was a tree of item objects linked via next/prev/children/parent. Automatic conflict resolution works wonderfully in this case.
Things get ugly quickly for any more complicated scenario.
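For the simple tree case, a sketch of what those declarative intents might look like (the structure and function names here are made up for illustration, not the actual implementation described above):

```javascript
// Items keyed by id, linked via parent/children ids. Intent is expressed
// declaratively ("insert X as first child of Y", "move X before Y") rather
// than as imperative pointer patches.
const items = {
  root: { id: 'root', children: [] },
};

function insertAsFirstChild(tree, item, parentId) {
  tree[item.id] = { ...item, parent: parentId };
  tree[parentId].children.unshift(item.id);
}

function moveBefore(tree, id, siblingId) {
  const parentId = tree[siblingId].parent;
  // Remove from the old parent, reinsert right before the sibling.
  const old = tree[tree[id].parent];
  old.children = old.children.filter((c) => c !== id);
  const idx = tree[parentId].children.indexOf(siblingId);
  tree[parentId].children.splice(idx, 0, id);
  tree[id].parent = parentId;
}

insertAsFirstChild(items, { id: 'a', children: [] }, 'root');
insertAsFirstChild(items, { id: 'b', children: [] }, 'root');
console.log(items.root.children); // [ 'b', 'a' ]
moveBefore(items, 'a', 'b');
console.log(items.root.children); // [ 'a', 'b' ]
```

Because each command names its target by id instead of by position, replaying it against a tree that has changed underneath still usually lands somewhere sensible.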
ES seems to be Event Sourcing – http://martinfowler.com/eaaDev/EventSourcing.html
In our case, a simple command queue solved the vast majority of use cases, and the corner cases were not disasters but at worst somewhat confusing.
Our biggest challenge was to sync data with relationships, especially data with circular relationships. We couldn't come up with a generic way to sync circular relationships, so we ended up building a very special purpose buffer on both client and server side.
That's an interesting pattern I've not heard of before - any links you could recommend for more details on it? It sounds really useful for moderate sized data sets.
There's a library called normalizr, whose purpose is to take nested API responses and turn them into flat structures. There's an article about it as well.
In my experience it made it trivial to handle the synchronization of arbitrarily complex object graphs (including those with circular dependencies).
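A hand-rolled sketch of what that normalizr-style flattening does (the entity types and helper here are hypothetical, just to show the shape): nested, even circular, object graphs become flat tables keyed by id, with references stored as ids.

```javascript
// Flatten a nested entity into per-type tables, replacing nested objects
// with their ids. Registering an entity before recursing breaks cycles.
function flatten(entity, type, tables) {
  if (tables[type][entity.id]) return entity.id; // already seen: stop here
  const flat = { ...entity };
  tables[type][entity.id] = flat; // register first, then recurse
  if (type === 'posts' && entity.author) {
    flat.author = flatten(entity.author, 'users', tables);
  }
  if (type === 'users' && entity.posts) {
    flat.posts = entity.posts.map((p) => flatten(p, 'posts', tables));
  }
  return entity.id;
}

const post = { id: 1, title: 'Hello' };
const user = { id: 9, name: 'Ada', posts: [post] };
post.author = user; // circular reference

const tables = { users: {}, posts: {} };
flatten(post, 'posts', tables);
console.log(tables.posts[1].author); // 9
console.log(tables.users[9].posts); // [ 1 ]
```

Once everything is flat and id-keyed, syncing an arbitrary graph reduces to syncing rows, which is why circular dependencies stop being special.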
By the time the client has synced with the server, the client could be doing something completely unrelated in an entirely different part of the app. Explaining to them that "the thing you were editing two hours ago has an issue, go here to resolve it" can be tricky.
You could pop up a /failure/ message and leave a little indicator icon that takes them to the queue and lets them see the items that failed. If retry is an option, they can see it; if retry is not an option, they can be told why.
If you make sure to show the user a clear warning that they're working offline, then they might be more understanding if their changes are rejected two hours later.
In your command queue pattern, what sort of ways do you handle this? Have certain "chokepoints" where the user must be online to proceed? What if you have an app where that sort of chokepoint seems to occur too frequently to make the queue useful?
Edit: I think this is similar to Fiahil's sibling comment, also relevant points made there. Thanks for the quality thoughts everyone!
It may not be possible to make every app work while offline. You'll probably want to disable certain features while offline, such as store purchases (or at least queue them up, but don't show them as having been successfully purchased).
In a situation where this kind of thing was more important, I would think about how to let the user decide how to reconcile their changes. It could be that a choice of discarding or retrying would suffice, or something more complex.
The "chokepoint" notion is also useful, and in fact our app did have a distinction between potentially offline actions and necessarily synchronous actions, but our synchronous actions were mostly queries like searches.
The thing about React-like frameworks that makes it very nice is that you can keep the command queue in a separate place and have a root rendering function like this:
1. Set the view state to a copy of the actual state
2. Update the view state according to each queued command in turn
3. Render the view state including a status bubble showing that some changes are not saved yet
And separately from the rendering, you have a worker that tries (and retries) to perform the queued commands. When a command is successfully performed, it's removed from the queue and its effect on the state is saved in the actual state.
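The steps above can be sketched as follows. This is a framework-agnostic toy version, with hypothetical names; a real React app would hold `actualState` and `queue` in a store and call `render` from its root component:

```javascript
let actualState = { count: 0 };
const queue = [];

// Steps 1-3: derive the view from a copy of the actual state plus the queue,
// including a status bubble showing how many changes aren't saved yet.
function render() {
  const view = queue.reduce((s, cmd) => cmd.apply(s), { ...actualState });
  return { ...view, pending: queue.length };
}

// The separate worker: try each command against the server; on success,
// fold its effect into the actual state and drop it from the queue.
async function syncWorker(sendToServer) {
  while (queue.length > 0) {
    const cmd = queue[0];
    await sendToServer(cmd); // throws on failure → caller retries later
    actualState = cmd.apply(actualState);
    queue.shift();
  }
}

queue.push({ apply: (s) => ({ ...s, count: s.count + 1 }) });
console.log(render()); // { count: 1, pending: 1 }
syncWorker(async () => {}).then(() => {
  console.log(render()); // { count: 1, pending: 0 }
});
```

Notice the view looks identical before and after the sync except for the `pending` counter; that's the whole point.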
Since I can't find anything on Google when searching for "react command queue" it would be cool to write a blog post with a simple example, but I don't know when I'd have time, so I encourage anyone who's implemented a similar thing to go ahead.
Any suggestions on front end frameworks?
I don't know of any open source libraries for the command queue itself. If your state changing commands go through some kind of layer that you control then this stuff is easier. When I implemented it, I first refactored all the commands to go through the same code path, which I could then modify to implement the queue.
(This was also useful when our backend had random issues causing 500s sometimes.)
A command has both an AJAX request and a state updating function. It's really easy with a React-like framework because you can just apply the command queue's state changes as part of the main view render, without actually modifying the main state.
How do you deal with network failure in the middle of the promise array?
Serializing a promise to localStorage: I get what you're trying to do; you want a worker to pick up exactly where it left off when you left the app. This is where a service worker would help you. I suppose you could write some kind of durable mailbox a la Akka, only running in the browser.
Also showing the state of the queue (Sims-style, or just a number of pending changes) is easier when you control the queue yourself.
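One way around the promise-serialization problem is to persist command *descriptors* (plain data) rather than promises, and rebuild the work on startup. A sketch, with `localStorage` faked as a plain object so the idea stands alone (key name and command shape are made up):

```javascript
// Stand-in for window.localStorage, so this runs anywhere.
const localStorage = {
  store: {},
  setItem(k, v) { this.store[k] = v; },
  getItem(k) { return this.store[k] ?? null; },
};

function saveQueue(queue) {
  // Each command is { type, payload }: serializable, and replayable on
  // startup by a dispatcher that maps type → AJAX call + state update.
  localStorage.setItem('commandQueue', JSON.stringify(queue));
}

function loadQueue() {
  return JSON.parse(localStorage.getItem('commandQueue') ?? '[]');
}

saveQueue([{ type: 'rename', payload: { id: 7, title: 'New' } }]);
console.log(loadQueue());
// [ { type: 'rename', payload: { id: 7, title: 'New' } } ]
```

The promise itself is never stored; it's recreated from the descriptor each time the worker wakes up, which also makes Sims-style "N pending changes" indicators trivial.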
In our case, we weren't using an event sourcing architecture, but this client side pattern can be used anyway.
There are four types of applications in that respect:
1. always disconnected (e.g. calculator)
2. occasionally connected (e.g. email client)
3. occasionally disconnected (e.g. messaging client)
4. always connected (e.g. trading terminal)
The first architectural task I do when given an application idea is to classify it using these models.
Each particular case requires its own storage/caching approach. "Offline first" is too broad.
On the subject of progressive enhancement, I'm a huge advocate and believe it is in general the way to do content sites. As has been pointed out above, though, some use cases do require a different approach. For context, this post was written after working on a number of HTML5 apps that were wrapped in Cordova/PhoneGap.
Unfortunately the web is missing a few pieces that would make it a very good platform for fully p2p, distributed apps. Service workers are a good start, but they have a 24-hour upper cap on max-age of the service worker itself, so users can't trust on first use (TOFU) and then be more secure against kinds of active targeting. The suborigin specification and iframe sandboxes for distributing apps offline and p2p would be much more useful for offline sandboxes if they didn't require that a server send an http header. These will become much more important as the web bluetooth API matures, which can allow distributing user data and application updates in a totally offline environment.
Even without being fully offline, it's very odd that when an automatic update to android or windows comes down the pipe, people in remote areas download the exact same bytes from servers in America over and over again, all over a thin pipe. They could fetch that data from each other and save a lot of money on bandwidth and data caps.
If you're really interested in learning about creating a workable offline web app, Google has some great documentation. https://developers.google.com/web/fundamentals/instant-and-o...
Last, the Safari team needs to seriously get to work on Service Workers. We will see web apps grow by leaps and bounds once the service worker spec is opened up to iPhone users.
Sounds exactly like what Apple doesn't want; no 30% revenue cut, no walled garden of control, ...
In any distributed system, the biggest cost is moving data between nodes, and therefore the biggest failure case is when data is moving slowly or not at all. It's a case you should always be prepared for.
If you write your app in a way that assumes the network is bad, which you should always do, whether it's an app or two microservices, then you'll have a more robust system.
And we've been doing those for like 50 years now. So why is this still so hard? Because new developers were, for all intents and purposes, born yesterday.
The question is, how good of an experience will you be able to deliver when your client is inevitably in one of these places?
I actually believe that >50% of web apps probably fall into this category, where they really cannot function properly offline because the online-ness is core to their functionality. That's why they are web apps in the first place.
Now if you are designing a web app that is a web version of a more traditional native app like Google Docs or something, then sure offline first makes sense. But I don't think that's the majority of web apps.
Since network links are always flaky, it just makes sense to do it this way. Since they are also always relatively slow, it makes sense to cache data locally in order to give a faster experience.
Not doing things offline-first in an app basically means that you are introducing synchronous requests everywhere: reading a support reply from yesterday is a synchronous request that fails "gracefully" if your 3G happens to be down, etc.
Telegram's web app is pretty nice. The app code is cached offline with a service worker and updated whenever possible, so it loads instantly. The most recent messages from your contacts are saved in the client as well. I appreciate all that stuff as a user, and the more stuff works offline the better, because it also means it's faster and more reliable.
For more info:
Some sites just wouldn't work well with HTML+optional JS. Google maps (IMHO) would not work well that way, so using an offline-oriented model would make better sense.
That being said, if your a news site, or a blog, yeah, a simple static page is probably a better solution.
The article is older but the advice is sound: only reach out to the server when you need to and ensure your client-side state doesn't break when you can't.
It's a little strange that they avoided naming any JS MV* frameworks even though some were out by then -- Backbone, Knockout, Ember, Angular, if I recall. But this article makes points that all future JS MV* development went on to treat as best practice in the years that followed.
- only slightly sarcastic.
I remember some early ecom cart implementations where you could "name your own price" as an unintended feature.
The list is actually quite extensive:
For mobile, clean separations also helps. It is definitely possible - and less complex than you'd expect - to have core functionality in a shared library, wired up to platform-specific native GUI toolkits.
But your question illustrates the problem. The pervasive presence of toolkits that add layer upon layer to create "cross platform" are the new norm. People are literally losing awareness that other options exist - much to the detriment of end users.
We have built an entire industry around such tooling - and long ago stopped questioning what value it brings.
The web comes with its warts, but I've yet to see an app platform as ergonomic and comfortable for the developer as the web. For 99% of my use cases, anything else is overkill and too much of a hassle. It's not the web's fault that it's a better app platform than actual app platforms.
Disable your web cache and try to use a web application daily. I wouldn't use software that constantly resets or removes my config files as a side effect of some other action.
That's like saying "disable a native applications persistent storage and use it daily." It is a meaningless comment.
Cookies/cache/localStorage works for most users. I am not most users and I recognize that. My criticism is that the primary method of persistent storage is fundamentally flawed and makes most web apps completely unusable for me.
I'm that person who carries a USB drive of portable software customized to my preferences primarily to be used on friends' machines or for setting up new machines. Setup once and use everywhere. Browser-based storage needs to be setup everywhere by design. I need to setup at Work and at Home because I refuse to tie my personal Home profile with my Work profile, so there is no "syncing" my profile across devices.
If you primarily use one device or can sync between devices and allow cookies/cache/localStorage to persist, then web apps won't be a problem for you at all. If any of the above doesn't apply - then web apps are a thorn in the side.
Oh and as Nadya said, sometimes this generalization causes issues. Engineering is a game of trade-offs I think :P.
That's the point. Most applications don't need speed, and most that need it can offload speed-critical code with no significant drawbacks.
The web is fast enough for 99% of use cases.
How's the update system work?
How quickly can you release them to all platforms?
Can they be easily customized and modified by the user?
Can they be easily shared?
What's the permissions model like? How much can that application access?
Will it still be around to use if the company goes bankrupt?
Can settings be backed up at all?
Can I even restore settings from a backup?
Can I easily share these settings?
Is there a way to carry it on a thumb drive so that in the event I find myself without internet for a first install/use - I can still use it?
How much control do I have over any data used/stored by the application?
But to keep this from turning ugly, my point was more that you need to take into consideration what you'll need for your app.
If it's an application that basically only exists as an interface for data stored in a backend server, then giving them the ability to exist after bankruptcy is pointless. However if it's something that needs root access and will frequently be used and installed on a system without internet access, native is better.
And saying things like a native app is "less bloated" when it takes literally multiple magnitudes more time to install and run with significantly more permissions to your whole system is silly.
But cherry-picking questions to prove a point is silly. However, to show it's not a gang-up on web apps, here we go:
Speed of the install process? Probably slower than loading a web page for serious applications
How does the update system work? Depends if you're releasing as a single statically compiled program or using shared libs that can be updated. Also if you have a db to sync this will affect things.
How quickly can you release them to all platforms? As long as it takes to compile to all compatible targets.
Can they be easily customized and modified by the user? In what regard? If you mean configuration, then yes. If you mean being able to manually tweak the style of the application, like when fiddling around in the element inspector, then no, unless you are using a theme parser that lets them adjust the themes.
Can they be easily shared? yes
What's the permission model like? Depends what granularity you want to have. Permissions can be restricted to the action level, user level, group level, machine level, global level, etc. Whatever logic you want to implement really.
How much can that application access? access in terms of what?
I hear a LOT about how writing an application for the web is wrong (especially on HN), but not much about why it's a good idea. I see comments about how native is faster, "less bloated", portable, "better designed", offline, and more secure. But never any comments on how long they take to install, how difficult it is to use them across multiple devices, how you need to either use an app-store, bundle your own updater (which follows all the best security practices), or rely on a distro to get around to including it for you. I never read discussions on how they tend to be larger, they have more access to the underlying system by default, how they are more difficult to secure, or how if you use the one application across multiple platforms you need to learn multiple UIs.
And while none of that is true across the board, it's stuff you need to spend more time on to get right, whereas you tend to get it "for free" when targeting the web. Obviously things go the other way for some features. Getting high performance out of a web app takes more work, getting "high security" to work in a browser is much more difficult, getting offline takes some consideration (IMO it's not that difficult today, but it does still take work).
I hear this excuse a lot, but there isn't a single benefit I've talked about which is for the developer only.
Install times are a big one. No user wants to install thing and manage dependencies or manually install updates. The sandboxing is another very pro-user thing as it makes sure my fuckups or mistakes can't easily cause their whole PC to be compromised, and they don't need to spend time making sure they have permissions setup correctly for my application on every device.
And for me, as a user, I greatly prefer web apps because I and many other people live a multi device life. If I have an Android phone, a Windows work PC, and a personal MacBook, I need to learn 3 different UIs for a single application. I need to configure them 3 times, manage their settings in 3 places. With a web app I learn 1 UI, I configure it once, I can login on my main PC or my father's Linux laptop and get the same app I'm used to in seconds.
No worrying about making backups for it, no worrying about the permissions I'm giving it, no worrying about the updates each machine is on, or how much space it might be taking up, or if it's using HTTP connections for updates, or that support for my older OS might get dropped, or that it won't hit my new distro for 6 months, or that it's not available in my package manager, or that it will autostart at boot and be an annoyance, or that uninstalling it will leave a bunch of shit behind, or any other of the things that native applications do that annoy me.
I go to a URL, and I use an app in less than a second on any device I own. And if I want, I can quickly go into the browser settings and wipe that app and everything it's touched from the PC in seconds.
That might be optimising for my wanted experience as a user, but I can't please everyone and I see a lot more multi-device multi-OS users who don't want to manage all the details of a native app than I ever do of users that want the opposite.
Especially on that first point. I can go to the vast majority of web apps on just about anything with a browser and get it up and running in less than a second knowing nothing more than a domain name.
And there's "cross platform" then there's "cross platform". Something like QT is amazing, but you are still looking at the big-3 desktop OS's, and maybe the big mobile guys if you work for it. A web app includes all of that, plus my TV, my car headunit, and even my damn watch! (I often use a web home-automation app from a browser on my watch, the UI adapts pretty damn well for quick light-flips)
Nothing is perfect for everyone, but just because it's been done since the 80's doesn't mean it can't be improved on. And as always it depends on your actual needs. There aren't any "better" and "worse" architectures.
Most web apps I've come across either don't run in IE or have various bugs/issues in Firefox as most of them are coded on and targeting Chrome due to Chrome's dominance of the web. Which reminds me of people building/testing only on Windows.
I will admit that the comparison I'm drawing are "same but different" problems. Browsers are a lot more standardized than operating systems and fixing a difference between Firefox/Chrome is usually a lot more trivial than fixing a difference between Windows/Mac.
And to be fair, a web app does get the job done, though I have to admit the number of companies that opt for an internal webapp instead of a desktop application is interesting.
Many things on the web should be standalone applications, others should be plain documents.
The web should be split into sandboxed programs and epub texts, and we’d be all better off.
On the desktop, the easiest way to run an app in a sandbox is if it's a web app that runs in the browser's sandbox to begin with.
The browser is fast becoming the "OS" and most browsers are several orders of magnitude more bloated than mainstream kernels, not to mention horrendously insecure - if we're worried about security, then browser vendors are the last people I would trust for anything important.
I feel pretty comfortable assuming that www.randomwebapp.com isn't reading and uploading my ~/.ssh and ~/.gpg, otherwise I'd be terrified of using the web at all.
That's partially true. By default, it cannot access any system files, or change any system settings without admin privileges. Admin access is also required to authorize a firewall exception if it wants to use the network. And you have the choice to arbitrarily restrict a software's read/write access to locations of your choosing. You might call that unusual, but such restrictions are common in managed environments.
>I feel pretty comfortable assuming that www.randomwebapp.com isn't reading and uploading my ~/.ssh and ~/.gpg, otherwise I'd be terrified of using the web at all.
Your comfort is misplaced. There are FAR more browser vulnerabilities (including chrome, firefox) allowing code execution than there are OS kernel and CPU vulnerabilities allowing you to break out of the native apps' sandbox.
I am too trusting of browser security though, you're right about that...
1) Virtual memory protection (can't access other app's memory)
2) Protection rings (safe transfer from UM to KM for system calls)
3) User interface isolation (one process can't interact with another's UI)
4) I/O privilege levels (prevents one rogue app from causing I/O starvation)
5) Process Integrity Levels. You can run apps under your own identity (be it super user or admin or regular user) but assign them reduced permissions as far as accessing data goes. You can run at-risk apps this way so that they can run without having access to any of your data.
6) You can restrict access to various other things in addition to the data using ACLs (network, device drivers, etc).
7) ABI level isolation using user mode kernels ("Library OSs").
A massive wall with an open gate isn't much of a wall.
It seems like a no-brainer to me: building a cross-platform application using web technologies makes the most sense from a business POV.
Compared to making a cross platform native application that works on Linux, Mac, Windows, Android, and iOS, making a web app—even with offline support—is delightful and efficient.
Plus, packaging it into a desktop app is lightning fast. It is a "wow" factor for potential clients.
That being said, doesn't appcache provide the offline page if the browser can't connect?
Web servers exist so you don't have to have all that data locally.
Take the tripadvisor app; when you go to the website you are nagged to go to the app, if you click you end up in the google play store and it loses where you were.
And of course it only works online; utterly pointless.
I was talking about a real-time communication application.
You can make the argument that the application should try to cope with a sporadic connection, but in the real world this is inefficient for both the user and the staff. If the connection is going in and out while they are trying to hold a conversation, it might be better to show the user a message saying: hey, your connection is down; try later when you have a better connection, or try an asynchronous support method like submitting a ticket.
Regardless that was just an example off the top of my head. My point still stands; web applications are web applications for a reason, and "offline first" doesn't make sense for most of them.
We detached this subthread from https://news.ycombinator.com/item?id=13247090 and marked it off-topic.
If a couple of links to articles is "incredibly insulting" to you then you shouldn't discuss stuff on the internet.
I said "in the real world" as opposed to abstract talking about an application; in the real world a staff member would be stuck waiting for the response from this user where you're trying to keep the conversation going despite their sporadic connection. I guess I could have said "in meatspace" or something.
Links to explanations of the topic being discussed is of course incredibly insulting. Come on.
Anyway, whatever man. You are not pleasant to talk to so I'm done.
As for etiquette... I politely explained how I see things different from you and provided a couple of links for reference. You immediately called me "condescending", "incredibly insulting", and "pedantic".
If you want a pleasant conversation, that's not the way to do it...
Also as a nitpick, I would say don't make your data objects SINGLETONS, make them SINGLE INSTANCE.
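A quick illustration of that distinction (hypothetical names): a singleton is globally reachable and hard to replace, while a single instance is an ordinary object you happen to construct once and pass in explicitly.

```javascript
// Singleton: module-level, every caller reaches for the same hidden object,
// so tests and alternative wirings can't swap it out.
const SingletonStore = { data: {} };

// Single instance: an ordinary class, constructed once by convention.
class Store {
  constructor() { this.data = {}; }
}

function makeApp(store) { // the dependency is explicit
  return { save: (k, v) => { store.data[k] = v; } };
}

const store = new Store(); // the one instance in production
const app = makeApp(store);
app.save('x', 1);
console.log(store.data.x); // 1

const testApp = makeApp(new Store()); // trivially isolated in tests
testApp.save('x', 99); // doesn't touch the production store
console.log(store.data.x); // 1
```

Same runtime behavior in the happy path; the difference shows up the moment you need a second instance.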
EDIT: someone deleted their comment but brought up a good point that you can have progressive enhancement with this. Yes, however the author made it clear he favors the JS or nothing approach. And there's even this snippet:
>An offline first approach would be to move the entire MVC stack into client side code (aka our app) and to turn our server side component into a data only JSON API.
As for Google, they also recommend using progressive enhancement, and they even removed their AJAX crawling scheme, posting about this and even giving a recommended trick for compatibility testing.