
What is the state of the art here in terms of reliable data persistence?

Last I was seriously looking at the PWA route, I understood that they were best suited for apps with always-online access because anything stored locally could be more or less jettisoned as soon as the device felt disk pressure. Is that still the case or did I dream that up?
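For reference, there is a StorageManager API that lets a page ask the browser not to evict its origin's data under storage pressure; a minimal sketch, assuming the browser supports it (the request can still be denied):

  // Ask the browser to mark this origin's storage as persistent,
  // so it is not evicted under storage pressure. The user agent
  // may still refuse or prompt the user.
  async function requestPersistence() {
    if (navigator.storage && navigator.storage.persist) {
      if (await navigator.storage.persisted()) return true; // already persistent
      return navigator.storage.persist();                   // resolves to true/false
    }
    return false; // API not available in this browser
  }

  // Optional: navigator.storage.estimate() resolves to { usage, quota }
  // if you want to show the user how much space the app is using.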




We are very close to having WASM SQLite with persistence in the web platform. Until now, SQLite compiled to WASM has been in-memory only, and you had to write the whole database out as a binary array to save changes. There is absurd-sql (https://github.com/jlongster/absurd-sql), which builds a virtual file system on top of IndexedDB for SQLite; it's incredible, but a bit of an ugly hack.
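For context, that older "export the whole database" pattern looks roughly like this with sql.js; the saveToIndexedDB helper at the end is hypothetical, standing in for whatever durable storage you use:

  // SQLite in WASM via sql.js: the database lives entirely in memory.
  const SQL = await initSqlJs({ locateFile: f => `https://sql.js.org/dist/${f}` });
  const db = new SQL.Database();
  db.run("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)");
  db.run("INSERT INTO notes (body) VALUES (?)", ["hello"]);

  // To persist anything, serialize the *entire* database to a byte array...
  const bytes = db.export(); // Uint8Array of the whole SQLite file
  // ...and stash it somewhere durable. saveToIndexedDB is a hypothetical
  // helper, not part of sql.js.
  await saveToIndexedDB("notes.sqlite", bytes);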

However, the new File System Access APIs (https://developer.mozilla.org/en-US/docs/Web/API/File_System...) that are landing in browsers will fix this. One of the things they enable is very efficient block-level read/write access to a private sandboxed filesystem for the website's origin, which is perfect for persistent SQLite. There is more here: https://web.dev/file-system-access/#accessing-files-optimize...
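A rough sketch of what block-level access to that origin-private filesystem looks like; this assumes a browser that ships the sync access handle API, and it has to run inside a dedicated worker:

  // Inside a dedicated Web Worker: open a file in the origin private
  // file system and do block-level reads/writes on it.
  const root = await navigator.storage.getDirectory();
  const fileHandle = await root.getFileHandle("db.sqlite3", { create: true });
  const access = await fileHandle.createSyncAccessHandle();

  // Write 4 KiB at offset 0, read it back, then flush to disk.
  const page = new Uint8Array(4096);
  access.write(page, { at: 0 });
  access.read(page, { at: 0 });
  access.flush();
  access.close();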


So the Web becomes an app store and the web browser a VM to run apps.


This has been a reality since browsers supported plugins; what we are getting now is a reinvention of what they already offered, with the difference that these are native browser APIs.


I have written a few backendless PWAs that rely solely on localStorage to store user data, and it's never been purged. I think Safari might be more aggressive about it, though I've no idea since I don't have an iPhone.

For persistence across browsers/devices, I offer the option to sync data via Dropbox and Google Drive.


Safari (on iOS, at least) deletes localStorage if the site/app hasn’t been used in some number of days (10?).

I lost my Wordle stats thanks to that feature :(


It's on controlled Android devices, but 10% of our sales force operates the application almost entirely offline.

In the morning, it syncs PouchDB with the CouchDB server, caches all the pages, then goes into offline-first mode.

Any time they get an internet signal, the service worker sends the data to PostgreSQL.

PouchDB takes care of sync and all the multi-browser storage problems.
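For anyone unfamiliar, the PouchDB side of that setup is roughly this; the database name and CouchDB URL are made up:

  // Local database in the browser (IndexedDB under the hood) syncing
  // with a remote CouchDB. Names and URL are placeholders.
  const local = new PouchDB("sales");
  const remote = new PouchDB("https://couch.example.com/sales");

  // Two-way, continuous replication; retries when connectivity returns.
  local.sync(remote, { live: true, retry: true })
    .on("change", info => console.log("synced batch", info))
    .on("error", err => console.error("sync error", err));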


I love PouchDB, it's incredible; however, I fear it's a project that is losing its momentum (I do think it has picked up a little over the last year, though).

It has a very aggressive stale bot closing issues (this search shows 700 closed stale issues: https://github.com/pouchdb/pouchdb/issues?q=is%3Aissue+stale...), some of which I really don't think it should have closed. It gives the impression of a very active yet stable project, which I don't necessarily think is accurate.

For example, I found a version hash collision bug while working on a side project; the issue was closed as stale (https://github.com/pouchdb/pouchdb/issues/8257).


I agree with the stale bot issue and with PouchDB maintenance activity being lower than it should be.

But the core features (store JSONs locally, sync them up with CouchDB) have been stable and reliable for many, many years and just work.


It's generally still the case. I have two workarounds: allow replication from the cloud in the event that local data is cleared out, and provide a mechanism for the user to manually back up and restore.
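A minimal sketch of the manual backup/restore idea, assuming the app state can be serialized to JSON (the function names here are just placeholders):

  // Hypothetical backup: dump app state to a downloadable JSON file.
  function downloadBackup(state) {
    const blob = new Blob([JSON.stringify(state)], { type: "application/json" });
    const a = document.createElement("a");
    a.href = URL.createObjectURL(blob);
    a.download = "backup.json";
    a.click();
    URL.revokeObjectURL(a.href);
  }

  // Hypothetical restore: parse a JSON file picked by the user
  // (e.g. from an <input type="file"> change event).
  async function readBackup(file) {
    return JSON.parse(await file.text());
  }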



