* Caching of shared static HTML for fast-as-possible page loads.
* Google indexing and searchability.
* Backbone apps that feel more like a series of pages and less like a single-page app.
... are all things that are addressed quite neatly by Rendr.
Jeremy, I think you're mistaking this for Keith Norman's SpainJS presentation (http://www.youtube.com/watch?v=jbn9c_yfuoM). He proposes the same approach, but I don't know if it ever got past a demo. Although it seems like they may be using some form of this at Groupon in production.
Anyway, it is exciting, isn't it? This is just the beginning for us -- we've had to make a few hacky design decisions to be able to ship, but I think we will get the kinks worked out. The trick, and the challenge, seems to lie in choosing the right set of abstractions to hide the code complexity from the application developer. I hope to open source it as soon as I can, to benefit from the input of luminaries such as yourself!
Oh yeah, and the offer to give a Tech Talk at Airbnb next time you're in SF still stands :)
How does the approach you've taken compare with the architecture outlined in nodejitsu's concept of isomorphic JS?
Y'all should get together and compare notes ;)
I was just talking today about how I'm tempted to try some of the other libraries going in this direction, but whenever I look at their code I'm envious of how clean Backbone's is. Seriously, the biggest turn-off to Angular is reading the code and seeing that its authors aren't as nit-picky (pseudo-OCD, whatever you want to call it) as you are. Just curious.
The basic idea is that for many public-facing websites (think NYTimes, natch, or Airbnb's search result listings, for example), the usual Rails convention of "Hey, here comes a request, let me generate a page just for you", is fairly inappropriate. Lots of "publishing" applications will melt very quickly if Rails ever ends up serving dynamic requests. Instead, you cache everything, either on disk with Nginx, in Memcached, or in Varnish.
But you know when the data is changing -- when an article has been updated and republished ... or when you've done another load of the government dataset that's powering your visualization. Waiting for a user request to come in and then caching your response to that (while hoping that the thundering herd doesn't knock you over first) is backwards, right?
I think it would be fun to play around with a Node-based framework that is based around this inverted publishing model, instead of the usual serving one. The default would be to bake out static resources when data changes, and you'd want to automatically track all of the data flows and dependencies within the application. So when your user submits a change, or your cron picks up new data from the FEC, or when your editor hits "publish", all of the bits that need to be updated get generated right then.
It's only a small step from there to pushing down the updates to Backbone models for active users ... but one step at a time, right? No need to couple those things together.
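The bake-on-change loop described above can be sketched in a few lines of Node. Everything here is illustrative: the dependency table, the function names, and the in-memory `rendered` store all stand in for real files, a real template renderer, and whatever actually notices the change (an editor hitting "publish", a cron job, a user edit):

```javascript
// Each data source declares which pages depend on it; a change to a source
// regenerates exactly those pages, instead of rendering per-request.
const dependencies = {
  article: ['article.html', 'index.html'], // pages that embed article data
  fecData: ['visualization.html'],
};

const rendered = {}; // stands in for static files on disk

function render(page, data) {
  rendered[page] = `<html><body>${JSON.stringify(data)}</body></html>`;
}

// Called by whatever notices the change: publish button, cron, user edit.
function publish(source, data) {
  for (const page of dependencies[source] || []) {
    render(page, data);
  }
}

publish('article', { title: 'Hello' });
// article.html and index.html now hold fresh HTML; a real system would
// write them where Nginx, Varnish, or a CDN serves them statically.
```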
ps. Kudos to you for reading the source. It's always enlightening: https://github.com/angular/angular.js/blob/master/src/ng/roo...
>I think it would be fun to play around with a Node-based framework that is based around this inverted publishing model, instead of the usual serving one. The default would be to bake out static resources when data changes, and you'd want to automatically track all of the data flows and dependencies within the application. So when your user submits a change, or your cron picks up new data from the FEC, or when your editor hits "publish", all of the bits that need to be updated get generated right then.
You mean most things don't already do this? I've been working on a personal blog engine with this as one of the core ideas (basically all static assets and pages are compiled on edit), and I thought it was a pretty obvious way to go about it. Looks like I'm indeed not the only one to think of it, but how new you present the idea as being is a bit surprising to me.
It also turns out that content sets that change infrequently but unpredictably are a pain to cache. You can cache them for a short time (as long as stale content can be tolerated), but then you lose cache effectiveness. Or you can cache them forever with some sort of generation/versioned cache, but that doesn't interface with named, public resources very well. Telling your visitors and Google that it's yourdomain.com/v12345/pricing, not yourdomain.com/v12344/pricing, doesn't really fly.
I definitely concur with your surprise about it being novel though. I think that for many situations it's just easier to run extra boxes to handle the increased load of generating dynamic content on the fly over and over again. It's good for SuperMicro and AWS. It's not so good for the planet.
I'm very excited to see Jeremy's approach to addressing the problem.
In a blogging context, stuff like Wordpress where pages are generated per request, then cached to handle any form of serious load just rubs me the wrong way... Such an infrastructure to display a few pages seems ludicrous.
> The default would be to bake out static resources when data changes, and you'd want to automatically track all of the data flows and dependencies within the application. So when your user submits a change, or your cron picks up new data from the FEC, or when your editor hits "publish", all of the bits that need to be updated get generated right then.
... so this is exactly what my WIP custom blog engine (ultimately meant to replace my posterous blog) looks like, initially composed of markdown source and makefiles, then ramped up to some rake tasks and a ruby library. An entity change (edit post, add comment...) should trigger generation of each page referencing it exactly once, and possibly immediately.
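That "exactly once" property can be sketched as a dirty set: entity changes mark the pages they reference, and a flush renders each affected page a single time per batch. The names here are made up for illustration (this is the idea, not anyone's actual engine):

```javascript
// Multiple entity changes may touch the same page; the Set deduplicates,
// so each dirty page is regenerated exactly once per flush.
const dirty = new Set();
const renderCount = {}; // stands in for actually rendering the page

function markDirty(pages) {
  pages.forEach((p) => dirty.add(p));
}

function flush() {
  for (const page of dirty) {
    renderCount[page] = (renderCount[page] || 0) + 1; // render(page) here
  }
  dirty.clear();
}

markDirty(['post-1.html', 'index.html']); // post edited
markDirty(['post-1.html']);               // comment added to the same post
flush();
// post-1.html was referenced by two changes but rendered exactly once
```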
I'm not trying to pick a fight or anything, but it sounds like you're arguing against lazy loading?
Eager caching is very situational, and not something you want to do unless you can reasonably anticipate the thundering herd, have very few items, or have unlimited resources to generate and store a complete cache.
I'm probably misunderstanding though.
Random piece of feedback: it's weird to use data-model_id (instead of data-model-id). I assume you're trying to match some pre-existing naming convention (though JS tends towards camelCase anyway...), but I think it would be better to go with dashes, as that is the HTML attribute standard. That was the only part that looked sloppy to me.
Another thought: did you guys experiment with event-based logic for postRender instead of pre-defined method hooks? I find the pre-defined method approach hacky-feeling.
I have thought about this one too in my own applications, and I seem to switch back and forth.
The good thing about using underscores is that the variable name can match on both sides of the expression:
var model_id = $foo.data('model_id');
var model_id = $foo.data('model-id');
I am trying to use camelCase for new code in other parts of the codebase, though, as it seems to be the general convention in JS.
Or you could say I'm not forced to use it, but now I'm using camelCase for variables and underscore_case for data attributes... so the two don't match, which makes me wonder why I have the non-standard underscore_case at all.
(This would be an example of one of the things that makes libraries feel kludgy, and one of the reasons I love how consistent Backbone is, like I was talking about further up the thread with Jashkenas.)
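For what it's worth, one reason dashes are the standard is that the HTML5 dataset API maps dashed attribute names to camelCase keys, so data-model-id lines up with the JS camelCase convention while data-model_id doesn't. A small sketch of that mapping (the helper function is mine, just to make the rule concrete outside a browser):

```javascript
// Mirrors how the DOM's dataset API derives a JS key from an attribute:
// strip the "data-" prefix, then camelCase each dash-separated segment.
function datasetKey(attrName) {
  return attrName
    .replace(/^data-/, '')
    .replace(/-([a-z])/g, (_, c) => c.toUpperCase());
}

datasetKey('data-model-id'); // 'modelId' -- matches camelCase variables
datasetKey('data-model_id'); // 'model_id' -- the underscore survives as-is
```

So with dashes, `el.dataset.modelId` (or jQuery's `.data('modelId')`) reads naturally in camelCase code; underscores opt out of that convention entirely.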
I'd love to help with the transition to Redis if I get some free time in the near future, assuming you're still looking to go that route.
Questions about session state management are popping up in my head but maybe that's some of the secret sauce in the Rendr portion of Airbnb's app.
Great work guys, this is really pushing the boundaries of full stack app development with js.
That's why I've been using Express for my middleware, npm modules as needed, and Mongoose as my DB wrapper (or if I want to swap it out for Redis, I can do so easily). I'm not sure how easily I could port my "meteor app" to a different framework that comes along.
Seriously, the sheer amount of work that you end up NOT doing almost makes it feel like cheating. I can completely understand why people don't like it. I fully intend to build something in Derby soon, and to examine Rendr. But I'm wanting to build an enterprise focused metadata management app, and Meteor seems to me to be the right way to go, because at the end of the day I just want to get something built that provides a capability.
To me, it's a lot easier to "grok" exactly what's going on when you're using a simple middleware layer and a simple socket framework, than when you're using a full-stack solution like Meteor.
A web app that runs completely on CSS.
No need for stupid web servers, but since web servers are handy we'll build one with CSS.
And I scoff at HTML, but for performance I made CSS compile to HTML too.
I'm working on a project that I'll be unveiling as my 'open source masterpiece'; it's just a little thing I call node.css. That's right, CSS bindings to C++. No more stupid C++ either.
Strap on some CSS build automation and what do you get? That's right, the holy grail. I'll call it CSS on Rails.
What's more, I've already done it and launched my current employer's flagship product on it. Hope it doesn't screw the entire business over the long haul. Oh well, I can switch jobs if that happens and pretend I never posted this.
Not trying to troll, just thought I'd throw this perspective out there.
I think we developers are running into the same issues or pitfalls that we ran into during the rise of Flash, where we moved everything to the client because we could. The problems that Rendr is solving seem to be the same problems caused by giving too much responsibility to JS.
I like to think of Rendr as just another Backbone app, that happens to be able to serve HTML from the server as well.
I think I'll understand when I run into the problem Rendr solves...
Server-side gives you fast page load times.
Client-side gives you fast user interaction times.
I'm thinking specifically of the search-page example that was shown as a screenshot in this blog post.
That said, very interesting article.
I wonder... has anyone tried rendering a page, and then bootstrapping your initial Backbone models by pulling data from the HTML?
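One common shape for that pattern is to have the server embed the models' JSON in a script tag, which the client parses instead of making an extra XHR on page load. A minimal sketch (all names hypothetical; the regex stands in for `document.getElementById(...).textContent` so it runs outside a browser):

```javascript
// What the server-rendered page might look like:
const html = `
  <div id="app"></div>
  <script id="bootstrap-data" type="application/json">
    {"listings": [{"id": 1, "city": "SF"}, {"id": 2, "city": "NYC"}]}
  </script>`;

// In the browser this would simply be:
//   JSON.parse(document.getElementById('bootstrap-data').textContent);
const match = html.match(
  /<script id="bootstrap-data"[^>]*>([\s\S]*?)<\/script>/
);
const data = JSON.parse(match[1]);

// ...then hand it straight to Backbone, e.g.:
//   var listings = new ListingCollection(data.listings);
```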
It doesn't have to be 100% Node.js front to back (you can still write your APIs in another language), but Node provides that handy bridge to get server-side rendering of client code.
We avoid this with Rendr by defining routes, controllers, models, etc. in a way that can be run on both sides.
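To make the shared-definition idea concrete, here's a sketch of routes as plain data consumed on both sides. This is an illustration of the pattern, not Rendr's actual API (all names invented):

```javascript
// One route table, defined once, usable by both server and client.
const routes = [
  { path: 'listings',     controller: 'listings', action: 'index' },
  { path: 'listings/:id', controller: 'listings', action: 'show' },
];

// Server side (sketch): app.get('/' + route.path, handlerFor(route));
// Client side (sketch): build a Backbone.Router routes hash from the table.
function toBackboneRoutes(table) {
  const map = {};
  for (const r of table) {
    map[r.path] = `${r.controller}_${r.action}`; // e.g. 'listings_show'
  }
  return map;
}

const clientRoutes = toBackboneRoutes(routes);
// { 'listings': 'listings_index', 'listings/:id': 'listings_show' }
```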
I'd just like to note that those working on Backbone single-page apps hosted by an ASP.NET site can also achieve this idiomatically, with the use of the Nustache view engine and controller actions that can do HTTP content negotiation.