The Web After Tomorrow (tonsky.me)
31 points by frankiesardo on June 23, 2015 | hide | past | web | favorite | 9 comments



I'm always sad to see that CouchDB and its counterpart PouchDB in the browser (or even on the server) are forgotten, even though they've been solving this problem for years now:

* easy replication (protocol is documented (http://docs.couchdb.org/en/latest/replication/protocol.html), and it's only HTTP+JSON)

* The replication is reactive, i.e. you only get the changes since the last time you synced

* The replication is realtime; it can use long polling or server-sent events.

* The replication is two-way; the browser doesn't have a special status in the replication protocol. It is effectively the same database. The application speaks only with the local database, which is sync'ed in the background.

* CouchDB effectively acts as a log of all transformations. If you modify a document, a marker is kept that tells you the document changed since the last time you saw it.

* There is filtering, although it can be inefficient and is completely controlled by the client (so no security filtering from the server)

* CouchDB does lack granularity: access control is only at the database level, meaning each user must have a different database. Couchbase is going another way with channels in the Sync Gateway (http://developer.couchbase.com/mobile/develop/guides/sync-ga...), but it's non-standard

We need to iterate on the CouchDB ecosystem, because it already provides a lot of what we need.
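The points above boil down to plain HTTP+JSON, which is all the documented replication protocol requires. A minimal, simplified sketch (the URL construction and checkpoint handling here are illustrative assumptions, not a full replicator):

```javascript
// Sketch of one leg of the CouchDB replication protocol (HTTP+JSON).
// The endpoint names mirror the documented API (_changes, _bulk_docs);
// checkpoint management is simplified for illustration.

// Build the _changes request that fetches only what changed since the
// last checkpoint -- this is the "reactive" part of the sync. With
// feed=longpoll the server holds the connection open until something
// changes, which gives you the "realtime" part.
function changesUrl(base, since, live) {
  const feed = live ? 'longpoll' : 'normal';
  return `${base}/_changes?feed=${feed}&since=${encodeURIComponent(since)}&include_docs=true`;
}

// Build the _bulk_docs payload that pushes local edits back. The
// protocol is symmetric, so the browser side does exactly the same;
// new_edits: false preserves the revision tree during replication.
function bulkDocsBody(docs) {
  return JSON.stringify({ docs, new_edits: false });
}

console.log(changesUrl('http://localhost:5984/mydb', '42-abc', true));
// -> http://localhost:5984/mydb/_changes?feed=longpoll&since=42-abc&include_docs=true
```

Because both sides speak the same two endpoints, "the browser doesn't have a special status": a PouchDB instance can be either source or target.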


Server-side ACL filtering without affecting CouchDB native API can be achieved using https://github.com/ermouth/covercouch


As I understand it, this is what you’re talking about: an iteration on top of CouchDB: http://cloudwall.me


A provocative article indeed. But some of the visions seem naive to me:

"Running exactly the same validation twice wouldn't make data more valid."

In a Web app scenario, what guarantees that the validations are the same, or that a potential attacker hasn't removed the validation from the client code? By embedding the rules inside the database you just make it a proxy to the raw data, which is the same old, wrong architecture.

"Network failures... should not undermine our consistency guarantees."

"Offline... I should be able to do local modifications, then merge changes when I get back online."

So far it has been shown that to get any consistency guarantees in such a case, you are severely limited in the kinds of data you can process. No library can magically resolve the conflicts for you unless the data is a CRDT, and there are not many applications consisting exclusively of sets and counters.
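For concreteness, here is the kind of data that does merge cleanly: a grow-only counter (G-Counter), one of the simplest CRDTs, sketched in plain JavaScript (the node IDs are made up for the example):

```javascript
// G-Counter: each replica only ever increments its own slot, so two
// diverged copies can always be merged by taking the element-wise max.
// Merge is commutative, associative, and idempotent -- that is exactly
// what makes offline edits + later sync conflict-free.

function increment(counter, nodeId) {
  return { ...counter, [nodeId]: (counter[nodeId] || 0) + 1 };
}

function merge(a, b) {
  const out = { ...a };
  for (const [node, n] of Object.entries(b)) {
    out[node] = Math.max(out[node] || 0, n);
  }
  return out;
}

function value(counter) {
  return Object.values(counter).reduce((sum, n) => sum + n, 0);
}

// Two replicas diverge offline, then merge deterministically:
let laptop = increment({}, 'laptop');          // { laptop: 1 }
let phone = increment(increment({}, 'phone'), 'phone'); // { phone: 2 }
console.log(value(merge(laptop, phone)));      // 3
```

The catch, as the comment says, is that most application state is not expressible as sets and counters, so a general "merge my offline edits" library cannot exist without either restricting the data model or pushing conflicts back to the user.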


Now consider the following:

* There are mobile apps which do not talk directly to a database.

* There are online games that do not talk directly to a database.

* There are Cloud-based apps on your computer that do not talk directly to a database.

So why should Web apps talk directly to the database?


"No, eventually DB will talk directly to the browser."

Essentially, this is what REST is -- a ("NoSQL") database over the HTTP protocol.


Yes, and some databases have REST APIs. In practice, pure REST is usually blurred with RPC calls and optimisations (partial fetch, batch fetch). The downsides of REST are that it's not agile (you have to support both new and old clients explicitly) and that it's manual: you have to write it by hand for every endpoint. A good, efficient data-fetch API for client-side rendering will very soon move very far from REST.
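The two "blurrings" mentioned can be sketched as URL builders. The parameter names below (fields, ids) are informal conventions I'm assuming for illustration; every real API spells them differently, which is part of the "write it manually for every endpoint" complaint:

```javascript
// Two common departures from pure resource-per-URL REST.

// Partial fetch: ask only for the attributes the current view needs,
// instead of the whole resource representation.
function partialFetch(base, id, fields) {
  return `${base}/users/${id}?fields=${fields.join(',')}`;
}

// Batch fetch: collapse N requests into one to avoid N round trips.
function batchFetch(base, ids) {
  return `${base}/users?ids=${ids.join(',')}`;
}

console.log(partialFetch('/api', 7, ['name', 'avatar']));
// -> /api/users/7?fields=name,avatar
console.log(batchFetch('/api', [1, 2, 3]));
// -> /api/users?ids=1,2,3
```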


In which direction do you mean? Also, I'm not sure what you mean by "you have to write it manually", nor why you have to support both new and old clients explicitly. Care to elaborate?


I read this and thought "Lotus Notes, the early days"



