I am glad you wrote about this, but I feel you are missing the real 'magic' that can be done with node.js and Backbone, or any client code you choose. Socket.io is great, fantastic even, and I think it really shows the power of node.js; but there is something built on top of socket.io that is even more wonderful, called dnode. It is really simple to use, yet has a lot of potential for server-to-server and server-to-client communication. It is in effect RPC, and I really can't do it justice.
For the example you give above, you could easily override the Backbone.sync function and have it instead send the server the objects it needs to fetch the models, while having the server call back to the client with said models. Not only that, but you could also remotely invoke the model-changed functions in Backbone directly from the server. There is so much more you can do, but I lack the knowledge at this point to put it in play just yet.
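A minimal sketch of what that sync override could look like. The socket here is an in-process fake standing in for a real socket.io/dnode connection, and all names are made up for illustration, not taken from either library:

```javascript
// Fake "server" end of the wire: answers every 'sync' emit with a canned
// response. In a real app this would be a socket.io or dnode connection.
var fakeSocket = {
  emit: function (event, payload, callback) {
    if (event === 'sync' && payload.method === 'read') {
      callback(null, { id: payload.model.id, name: 'from server' });
    } else {
      callback(null, payload.model); // echo back creates/updates
    }
  }
};

// Drop-in replacement for Backbone.sync: same (method, model, options)
// signature, but it emits over the socket instead of issuing XHRs.
function socketSync(method, model, options) {
  var json = model.toJSON ? model.toJSON() : model;
  fakeSocket.emit('sync', { method: method, model: json }, function (err, resp) {
    if (err) { if (options.error) options.error(err); }
    else if (options.success) options.success(resp);
  });
}

// Usage: set Backbone.sync = socketSync; then fetch/save work unchanged.
socketSync('read', { id: 7 }, {
  success: function (resp) { console.log(resp.name); } // logs "from server"
});
```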
https://github.com/substack/dnode & http://substack.net/posts/85e1bd/DNode-Asynchronous-Remote-M...
Granted this is only the first nugget, but the optimist in me wants to see this approach evolve into a framework that truly spans server and client, instead of covering only half of the story.
Are there ever problems with one side referencing objects that no longer exist on the other side?
My project simply proxies client fetch/save/destroy calls via Backbone.sync to the server. Through a RESTful URL convention and Express.js the model and collection of the client's request are recreated, any transformations applied and persisted, then sent back down the pipe. Server-side, models and collections extend their client-side bases and introduce additional functionality (i.e. per-user access control). Models and collections are not kept in memory server-side beyond the duration of a request.
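The RESTful URL convention could look something like the sketch below: a pure routing function that recovers the collection, model id, and CRUD action from a request. The paths and names are illustrative, not taken from the actual project:

```javascript
// Map an HTTP method + path onto the collection/model/action the client
// meant. In Express this logic would live in route handlers; it's a plain
// function here so the convention itself is easy to see and test.
function route(method, path) {
  var m = path.match(/^\/api\/([^\/]+)(?:\/([^\/]+))?$/);
  if (!m) return null;
  var action = {
    'GET': m[2] ? 'read' : 'list', // /api/tasks vs /api/tasks/42
    'POST': 'create',
    'PUT': 'update',
    'DELETE': 'destroy'
  }[method];
  return { collection: m[1], id: m[2] || null, action: action };
}

console.log(route('GET', '/api/tasks/42'));
// → { collection: 'tasks', id: '42', action: 'read' }
```

Server-side you would then look up the matching collection class, re-inflate the request body into a model, apply access control, persist, and send the result back down.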
One problem I immediately see in this hybrid approach is the requirement to keep the state for all [active] users in memory on the server. This does not scale well - which is why conventional web development tries to be as stateless as possible.
It's not hard to partly remedy this by adding basic lazy-fetching and cursor capabilities to Collection. But you lose some of the most interesting features in the process (push-notify on changes), and preserving those at the same time becomes much trickier (bordering on classical ORM problems).
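A bare-bones version of such a cursor might look like this; the `fetchPage` transport is faked here, where in practice it would be whatever sync/ajax/dnode call actually talks to the server:

```javascript
// Minimal cursor over a server-backed collection: instead of keeping all
// models in memory, fetch one page at a time and track the offset.
function cursor(fetchPage, pageSize) {
  var offset = 0;
  return {
    next: function (cb) {
      fetchPage(offset, pageSize, function (items) {
        offset += items.length;
        cb(items, items.length < pageSize); // second arg: exhausted?
      });
    }
  };
}

// Fake backend with 5 items, page size 2:
var data = [1, 2, 3, 4, 5];
var c = cursor(function (off, n, cb) { cb(data.slice(off, off + n)); }, 2);
c.next(function (items, done) { console.log(items, done); }); // [1,2] false
```

The hard part the comment alludes to is not this mechanism but keeping change notifications working for models that were never fetched into memory.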
There's also still ground-level stuff missing in Backbone, e.g. support for recursive updates of nested collections (say, from one big, nested JSON blob) is just not there yet. A few approaches have been discussed, but afaik none implemented yet. The models need to be beefed up with some sort of schema (e.g. json-schema) to truly solve this one.
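To make the idea concrete, here is a toy sketch of schema-driven recursive inflation. The schema format is invented for illustration (json-schema, as mentioned above, would be the real candidate):

```javascript
// Given a schema saying which attributes are nested collections, walk a
// JSON blob and split it into plain model attributes plus inflated
// children, recursing into each nested item.
function inflate(schema, blob) {
  var attrs = {}, children = {};
  Object.keys(blob).forEach(function (k) {
    if (schema[k] === 'collection') {
      children[k] = blob[k].map(function (c) { return inflate(schema, c); });
    } else {
      attrs[k] = blob[k];
    }
  });
  return { attrs: attrs, children: children };
}

var out = inflate({ comments: 'collection' },
                  { id: 1, comments: [{ id: 2 }] });
// out.attrs → { id: 1 }; out.children.comments[0].attrs → { id: 2 }
```

In real Backbone this would additionally have to `set` into existing models (to fire change events) rather than rebuild objects, which is exactly where the missing schema support bites.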
However, none of that is insurmountable. All it takes is some more of the smartness that the Backbone developers have already demonstrated. And elbow grease.
For example I'd love to see backbone paired with 'dnode' or a similar RPC approach. That might open amazing possibilities.
Regarding the scaling issue: yes, this is definitely a weakness (I actually mention it as such in the last paragraph or so). We're working on a solution for that using Redis as a sort of shared memory, but it's just not ready for primetime, and I figured I'd write up what I had going so far.
Regarding recursive updates of nested collections, the "mport" function in the post works well for re-inflating the entire state, though not quite as well for updates. But for this type of thing I prefer piecemeal updates anyway, so we don't have to send as much data each time.
I'm definitely gonna check out dnode some more as well. Cheers.
And thanks for that!
In hindsight my first reply came out more nitpicky than intended; all my concerns are aimed at (still to be fixed) shortcomings in the backbone/node ecosystem, not at your particular implementation.
Major kudos for not only coming up with (at the very least) an intriguing Proof Of Concept but then also writing a great blog post to describe it. This will be learning material for myself and many others.
It could give you the best of both worlds - an initial request returns a quick, fully-formed HTML page, so it's speedy on the first load and crawlers will know what to do with it. Later requests use AJAX, so things are responsive and the server isn't weighed down with HTML rendering.
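One simple way to get that best-of-both-worlds behavior is to share a single template function between the first server-side render and later client-side AJAX updates. A toy sketch, with all names invented:

```javascript
// One template function, usable on both server and client. The server
// calls it to emit the initial full HTML page; the client calls the same
// function when AJAX responses arrive, so markup never diverges.
function renderItem(item) {
  return '<li id="item-' + item.id + '">' + item.title + '</li>';
}

// Server side: embed the rendered list in the initial response, so the
// first load is fast and crawlers see real content.
var items = [{ id: 1, title: 'a' }];
var page = '<ul>' + items.map(renderItem).join('') + '</ul>';
console.log(page); // <ul><li id="item-1">a</li></ul>

// Client side (conceptually): on an AJAX update, call renderItem(newItem)
// and insert the result into the existing <ul>.
```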
Think message passing between client objects (in js) and server objects (in Scala). You can do jQuery-ish things server side to the client side pages. (That blew me away when I saw it in action.) Your submit event can pass params to the server object. It's really simple and straightforward. I'm looking for a Lift for Node.js.
Check out the ubiquitous "chat server" example in the simply lift book. http://simply.liftweb.net/
Now, off the top of my head I can't think of a particular way that this could screw you. But I can't say that it looks like the Right Thing to do either.
I don't know how far the Asana team got with Luna, but I think their approach for a unified client-server model is much closer to how I'd want things to work: http://asana.com/luna/
Moreover, a function does not even need to resolve to the same code on the client and the server. For example, a model-validation function could evaluate to the actual validation logic on the server (incl. DB lookups etc.) and to a mere ajax RPC (calling that function on the server) on the client.
I could very well see a smart node server-side library that lets you write one js file and then transparently mangles it as necessary before sending it to the client. The trick is to keep the magic down to a sane amount (slippery slope!) so that the end result always remains predictable.
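A toy sketch of that client/server split. The RPC transport is faked in-process here, and `defineShared`/`rpcCall` are hypothetical names, not a real library (dnode would be one way to back the transport for real):

```javascript
// Registry of real implementations, living only on the "server".
var serverFns = {};

// Fake RPC transport: in-process here; with dnode this would go over the
// wire and the callback would fire asynchronously.
function rpcCall(name, args, cb) { cb(serverFns[name].apply(null, args)); }

// Same name resolves to different code per side: the server gets the real
// implementation, the client gets a stub that round-trips via RPC.
function defineShared(name, impl, isServer) {
  if (isServer) { serverFns[name] = impl; return impl; }
  return function () {
    var args = [].slice.call(arguments), cb = args.pop();
    rpcCall(name, args, cb);
  };
}

// Server side registers the real validation (could hit the DB here):
defineShared('validateName', function (s) { return s.length > 0; }, true);

// Client side gets a stub under the same name:
var validateName = defineShared('validateName', null, false);
validateName('alice', function (ok) { console.log(ok); }); // true
```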
Yes, it seems luna is basically taking this approach.
Lots of room for experimentation and exploration in this space - exciting times ahead.
In the end I ended up redoing the project as a CLI app in Haskell.