I’m a big proponent of being able to run the code locally, but this seems to be more along the lines of, first update some local DB, then sync it.
I see nothing fundamentally different here than using a well designed API for ‘sync’, and none of it does anything to revive the web browsing experience. Gone are the days where “page load” meant that you could safely begin reading without interruption. This whole “web application” thing was a foobar’d mess. Yes, we probably want some level of interactivity, but I fail to see how casting the web application problem as a database sync issue addresses the fundamental issues.
That being said, I can see the appeal. It’s a clean interface to work with and it allows for new frameworks to displace existing problematic frameworks.
I just worry that in 5 years we’ll all be complaining about poorly designed schema and overloaded table updates again. The solution? Microservices with their own smaller tables which batch updates to the larger system of course!
It's local first because you write your data locally, first, then you sync it.
As opposed to writing it remotely first.
Also how many well designed APIs for sync really exist in the wild? I feel like you need a decent knowledge of distributed systems theory to even understand the problem space. "Last write wins" is a cop out and just means you are throwing away data semi-arbitrarily.
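To make that concrete, here is a rough TypeScript sketch (the types and field names are made up, not from any particular library) of what last-write-wins actually does when two offline edits meet:

    // Hypothetical sketch of last-write-wins merging two replicas of a note.
    interface NoteReplica {
      text: string;
      updatedAt: number; // wall-clock millis on the device that wrote it
    }

    // LWW: keep whichever write carries the later timestamp, discard the other.
    function mergeLastWriteWins(a: NoteReplica, b: NoteReplica): NoteReplica {
      return a.updatedAt >= b.updatedAt ? a : b;
    }

    // Two users edit the same note while offline:
    const onAlicesPhone: NoteReplica = { text: "Buy milk and eggs", updatedAt: 1000 };
    const onBobsLaptop: NoteReplica = { text: "Buy milk and call the plumber", updatedAt: 1001 };

    // Bob's clock happened to be a millisecond ahead, so Alice's "eggs" edit is dropped
    // without anyone being told; that is the semi-arbitrary data loss.
    console.log(mergeLastWriteWins(onAlicesPhone, onBobsLaptop).text);
    // => "Buy milk and call the plumber"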
Exactly. A great working example is Obsidian. Every version of it - desktop, Android, iOS - writes locally first and then syncs to remote (if you’re subscribed to the Sync service; otherwise it just writes locally). Other than Sync (and some other optional services like Publish), it is 100% functional without network connectivity.
Call me old fashioned, but why would anybody ever think doing it differently was a good idea?
Having it run locally and only sending requests when really needed is a way to make your app feel reliable and snappy. If every silly button of your app sends out requests, the snappiness of your whole UI will depend on the mercy of the network gods. Maybe it is the thrill of the gamble?
> Call me old fashioned, but why would anybody ever think doing it differently was a good idea?
Most would probably agree, but then you're adding a layer of complexity on top of what could be a very simple client/server application that satisfies 80% of your users. There is more "cheap" software out in the world than robust, complex software because most software doesn't make it far from MVP.
I don’t think there’s any one reason. In the early web days there were no client-side data stores, just server middleware and a database that did all the work, generating and updating web pages with a little bit of JavaScript on them.
It wasn’t until Web 2.0 and the iPhone that web and mobile apps gained the ability to do the write-local-then-sync model. But the old client-server model was still easier to develop for, had the momentum, and internet speeds were fast enough.
Nowadays there are services like Firebase that make it super fast and easy to develop full-featured apps. They provide everything an app needs - user accounts, sessions, data store, etc. All of which is still more work to replicate in the local first model. Afaik there’s nothing like Firebase for local-first.
> Call me old fashioned, but why would anybody ever think doing it differently was a good idea?
Simplicity. Ignoring the issues of broken connectivity, going straight to the server to write and reading back from it removes the need for both store/retrieve and sync - you only have to think about store/retrieve. There are unavoidable sync issues (two users load the same object, edit for a while, then save) that you have to deal with, but they can be dealt with more simply because you can assume they are rare, rather than each user potentially having been offline for days and having written a novel over many updates to an overlapping set of documents.
Also, historically people just haven't had good offline options, and many still don't. If you have to contend with users on more ancient browsers/devices (which many of us unfortunately do) then you have to support online working too, so if you have to prioritise, that needs to come first (it supports all possible users) and offline comes after (as an improvement for those whose devices support the tech and have the storage space your solution needs).
Locking issues, while not easy in an online client/server setup, are simpler than offline-first. And by simpler I mean actually possible to deal with - with offline-first you instead have to deal with conflict resolution.
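To illustrate why the online-only case is more tractable, here is a rough sketch (hypothetical names, an in-memory Map standing in for the server's database) of optimistic concurrency: the server rejects stale writes by version number, so "two users edited the same object" becomes a retry rather than an open-ended merge problem:

    // Hypothetical sketch: optimistic concurrency for an online-only client/server app.
    interface Doc {
      id: string;
      version: number; // bumped by the server on every accepted write
      body: string;
    }

    // In-memory stand-in for the server's database.
    const serverDb = new Map<string, Doc>();
    serverDb.set("doc-1", { id: "doc-1", version: 1, body: "first draft" });

    // The server only accepts a write if the client saw the latest version.
    function saveOnServer(update: Doc): { ok: true; doc: Doc } | { ok: false; current: Doc } {
      const current = serverDb.get(update.id)!; // sketch: assume the doc already exists
      if (update.version !== current.version) {
        // Stale write: tell the client to reload, reapply their edit, and save again.
        return { ok: false, current };
      }
      const saved = { ...update, version: current.version + 1 };
      serverDb.set(saved.id, saved);
      return { ok: true, doc: saved };
    }

    // Two clients loaded version 1; the first save wins, the second gets a clean rejection.
    const fromClientA = { id: "doc-1", version: 1, body: "draft, edited by A" };
    const fromClientB = { id: "doc-1", version: 1, body: "draft, edited by B" };
    console.log(saveOnServer(fromClientA)); // ok: true, server is now at version 2
    console.log(saveOnServer(fromClientB)); // ok: false; B reloads and redoes their edit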
Local-first / desktop apps were really easy back in early 2000's and before since users pretty much only had a single device.
Today, users have many devices with many different storage and compute constraints. They also expect their data to be available on all devices and, to top it off, be able to invite outside collaborators.
Handling this heterogeneous landscape of devices and collaboration is much simpler in a cloud model. Trying to put more data and compute locally suddenly means worrying about a multitude of device profiles and multi-way sync.
That surely depends. I agree that the syncing requirements can make a centralized cloud architecture beneficial.
But there are many services where this is not the case, where a local-first architecture would be beneficial.
If the snappiness of your application is of any priority you will have to do a lot of local caching anyway. I'm not saying local first is always the right approach (it isn't), but there are many remote-first apps out there which would have been better had they been done local first (e.g. because the state they are syncing is trivially simple).
I really like this way of thinking about it. I try to advocate for internetless development, so it meshes nicely with that.
It also helps me understand the scope of the frameworks. They need to maintain local state akin to saving files locally.
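As a rough sketch of that scope (the names are made up and a Map stands in for IndexedDB or files on disk), "maintain local state" ends up looking like: write locally right away, queue the change, and flush the queue whenever the network is actually there:

    // Hypothetical sketch of "write locally first, sync when you can".
    interface Change {
      key: string;
      value: string;
      at: number;
    }

    const localStore = new Map<string, string>(); // stand-in for IndexedDB / files on disk
    const pendingChanges: Change[] = [];          // outbox of writes not yet on the server

    // The UI only ever calls this; it returns as soon as the local write is done.
    function saveLocally(key: string, value: string): void {
      localStore.set(key, value);
      pendingChanges.push({ key, value, at: Date.now() });
    }

    // Called opportunistically (online event, timer, app resume), never on the UI's critical path.
    async function flushToServer(push: (c: Change) => Promise<void>): Promise<void> {
      while (pendingChanges.length > 0) {
        await push(pendingChanges[0]); // if this throws, the change stays queued for next time
        pendingChanges.shift();
      }
    }

    // The app stays fully functional even if flushToServer never succeeds.
    saveLocally("note-1", "works offline");
    flushToServer(async (c) => console.log("synced", c.key)).catch(() => {
      /* still offline; try again later */
    });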
It’s just funny to me how we had this as the default mode of operation many years ago, then lost it because everything went to the cloud, and now we’re realizing that was kinda a mistake and we’re trying to fix it in the new environment.
The reasons why everything left an "internetless model" seem to be getting lost to time but I explained some reasoning here in this comment: https://news.ycombinator.com/item?id=37500449
Local first development is a misnomer. It should be local first persistence. All it does is trade the complexity of API management for the complexity of synchronizing distributed persistence.