Server-generated JavaScript responses (37signals.com)
122 points by steveklabnik on Dec 10, 2013 | 96 comments



As I read that I kept wanting to say "No, we're so past that! I need client-side everything with JSON APIs" (as I'm sure a lot of people and the mindshare of the interwebs are going), but then I read the last part:

  > If your web application is all high-fidelity UI, it’s completely legit to go this route all the way. You’re paying a high price to buy yourself something fancy. No sweat. But if your application is more like Basecamp or Github or the majority of applications on the web that are proud of their document-based roots, then you really should embrace SJR
and I realized that he's completely right: the majority of the web is still document-oriented "pages". If that's your case, don't try to be an "app"; the 37signals way works just fine for you. In other words, it would not be a good idea to make a blog a Single Page App (I'm looking at you, Blogger).


I'm sure there will eventually be a happy medium where a page/app is rendered on the server, but upgraded asynchronously to be as dynamic as it needs to be, so we'll see the convergence of apps and pages. The return of progressive enhancement, I suppose - but even better because of pushState, indexeddb, etc.

That's if there's truly some benefit. Server-side rendering has its place, but probably isn't important for apps on the very "app" side of the content-app spectrum. What makes me want very easy to use server-side rendering, even for apps, is the fact that so many of those apps have embedded content - say an email in Gmail - that it'd be great to reuse the same templating if it's later deemed important to have a link to a static version of the content "outside" of the app.


So ... let's deliver our HTML templates pre-filled with HTML data. Sounds like the best of both worlds to me, and less duplication of work if the string concat is already done for you ;-)


I find the best approach is to render the initial view server-side and then render any future requests client-side with data from your API. That way your site will be super fast and can be indexed by search engines.

If you're using node.js then you can even use the exact same template rendering code.
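
For illustration, a minimal sketch of a shared template, written as a function that works both under Node's require() and as a browser global (all names are hypothetical, and real code would HTML-escape the interpolated fields):

    // templates/message.js
    (function (root, factory) {
      if (typeof module !== 'undefined' && module.exports) {
        module.exports = factory();        // Node: server-side rendering
      } else {
        root.renderMessage = factory();    // browser: client-side rendering
      }
    }(this, function () {
      return function renderMessage(msg) {
        return '<li class="message">' + msg.author + ': ' + msg.body + '</li>';
      };
    }));

The server calls it while building the initial page; the client calls the very same function on data fetched from the API afterwards.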


Did you read the article? They specifically address this:

  > This means it might well be faster from an end-to-end perspective to send JavaScript+HTML than JSON with client-side templates, depending on the complexity of those templates and the computational power of the client. This is double so because the server-generated templates can often be cached and shared amongst many users (see Russian Doll caching).


You can even share the same templates between a Python server side and the client, using Jinja2 and Nunjucks.


If you're just serving a web page then maybe server-side rendering is sometimes a good way. But as soon as you have a mobile app and a tablet app, and partners that want to read your data, you realize that what you need is an API. And once everybody except your website uses this API, you start to wonder.


Can't your web server also consume your API? If SJR can improve performance that seems like the best option.


I've had exactly the same realization. Too often I am working with embeddable widgets with dynamic functionality, forgetting that a lot of the webapps out there still work with the old CRUD app model where you list items in a table, select one item, edit, repeat.


This is all well and good until you want to handle error responses on the client. Let's say your user is on spotty wifi and they hit the submit button for your form. Guess what happens if that AJAX request fails? Probably a whole lot of nothing, unless you've got some templates client-side to show an error message. At that point, if you're making your error messages fit into each of your UIs, you might as well just be using client-side templates anyway and SJR is moot.

On top of that, you can't update the UI in a meaningful way until you get a response from the server.

That's also not even touching the security considerations that you need to make to use this technique: you can't implement a CSP, you've got to make damn sure you're properly escaping (in a special way) every piece of data that comes through, and you've got to make sure the response you're sending can't be used in a script tag (i.e. you need to add an intentional syntax error that you strip off), or an attacker could simply put a script tag on his own site pointing at a URL on your site that returns sensitive information.
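
To illustrate that last point, a sketch of the poison-prefix defense (the ")]}',\n" prefix is one common convention; the endpoint name is hypothetical). The server prepends the prefix so the response is a syntax error when pulled in via a script tag, and the legitimate XHR caller strips it before parsing:

    $.ajax({
      url: '/messages.json',
      dataType: 'json',
      // Remove the poison prefix the server prepended; a <script src>
      // pointing at this URL would just hit a syntax error and leak nothing.
      dataFilter: function (raw) {
        return raw.replace(/^\)\]\}',\n/, '');
      }
    }).done(function (data) {
      // safe to render `data` now
    });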

TL;DR: Badasses only.


This is spot-on.

I worked with someone who helped develop AJAX interfaces before the technique even had a name, back when XMLHttpRequest was brand-new, and they built a product similar to this in philosophy -- it must have been around 2000.

I worked with a similar model as well on a couple of apps, until finally migrating entirely to client-side rendering, and I would never go back. The complexity of server-side code having to write out client-side code to update things just defeats you after a while; it's hard to keep a separation of concerns, and things start to turn into spaghetti despite your best intentions. Refactoring becomes increasingly difficult.

But most of all, it's just like you say -- you can't do error handling, or change the interface before you receive a response from the server, or deal with simultaneous requests, or probably 20 other things.

I'm honestly really surprised to see someone recommending this today. Dealing with huge numbers of stand-alone snippets which update certain parts of pages under certain conditions is just an organizational nightmare.


Actually, if you take this approach but with a twist, it's quite possible:

As was shown with Google+ widgets for performance optimization, I.G.'s talks on High Performance Websites, and chatter from Polymer-using folks: deliver your first page, your template, as HTML. Pre-render things, but then make it interactive on focus (if a widget), or load your JS and continue loading data (mobile), or take your HTML and use it as the starting point for a really nice template and loading framework for widgets (Polymer).

The point then is that JSON is rather useless. It's not optimized binary for machine consumption, neither is it display-optimized DOM, in which styles and content can mix and interact.

Oh, and failure states are entirely possible to show. If you want a dumb failure state, code it to kick in when the loading spinner times out. For more advanced failures, write the logic client-side as you would normally, then deliver and modify the HTML again as you would normally.

No one says that because your primary communication mechanism for server updates is HTML, you can't in turn use JS on the HTML to provide updates between updates, as it were.


I think the reason they do this mostly boils down to wanting to write Ruby. If there were a native Ruby engine in the browser, I'm pretty sure they would do everything they could client-side.


And initial latency/page weight?

Even Wikipedia would work great as a client-side app with JSON, but then you want to open 30 tabs and you can feel it lagging.


I think more so than the error case (which is hopefully rare), you lose any possibility of optimistic rendering, where the user sees an immediate response (i.e. the message is added to the UI) while the server updates in the background.

The way this is set up, you must wait for an HTTP request to complete before any UI updates. I don't care how much caching you do, that will always be slower than immediate client-side update.
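
For illustration, a minimal jQuery sketch of optimistic rendering with rollback on failure (endpoint and selectors hypothetical):

    function sendMessage(text) {
      // Show the message immediately, before the server has confirmed it.
      var $row = $('<li class="message pending"></li>')
        .text(text)
        .prependTo('#messages');
      $.post('/messages', { body: text })
        .done(function () { $row.removeClass('pending'); })
        .fail(function () {
          // Roll back the optimistic update and let the user retry.
          $row.remove();
          alert('Message failed to send.');
        });
    }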

I really think the best way is to have both - send HTML to the client on page load, and get JSON for every subsequent request.


But that's simply not true, is it? You can already display something while waiting for the HTML just as you can while waiting for JSON.


I've been experimenting along these lines with making Rails partials real-time using websockets, and it works quite well for Basecamp/Github-style apps. Basically you can get "true" MVC while using your existing erb/haml views, and you get real-time updates for all connected clients. Some web apps that replicate desktop-like behavior require full client-side MVC, but I think many apps can hit a sweet spot and get the best of both worlds: server-side rendering with real-time updates.

project: https://github.com/chrismccord/sync


Hey thanks for sharing that, I think it's a really interesting middle ground.


sync is amazing. just wanted to say thank you :)


I'd much rather use templates that can be rendered server side as an optimization if necessary, and then updated normally on the client.

As for the criticism of SPAs that the "entire" JavaScript library must be downloaded up front, this can be mitigated by lazy loading the dynamic bits. If you look at how Google+ behaves, it renders the page server side, and then loads controllers for various parts of the app on demand. For such a complex app it loads incredibly fast.

It might be a little bit before there's a server-side renderer for custom elements / Shadow DOM / template binding, plus patterns for deferred loading code, but personally I think that will be the way to go in the near future.


> It might be a little bit before there's a server-side renderer for custom elements / Shadow DOM / template binding, plus patterns for deferred loading code, but personally I think that will be the way to go in the near future.

There won't be. These guys are just desperately clinging to the past. They should be writing the Ruby on Rails for Opal instead of trying to extend the life of something that, frankly, has seen its day.


> These guys are just desperately clinging to the past. They should be writing the Ruby on Rails for Opal instead of trying to extend the life of something that, frankly, has seen its day.

Care to expand? I'm confused what you mean.


I mean while the rest of the web world has moved on to the client being the center of the universe the 37Signals guys are busy trying to make sure they can still work the same way they have since 2005.


I am all for SPAs and love Angular, but in reality >90% of the web world is still using the document-oriented model of 37signals. That will change eventually, but it's a long way off.


I'm quite sure there will be. Everyone I know working on those standards is interested in seeing server-side rendering; it's just a matter of getting there.


Not sure I got that - that sounds an awful lot like JSF, which should never have been invented.


   lazy loading the dynamic bits
Are you talking about RequireJS AMD?


No, I'm talking about the technique, not a specific library.
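
For example, a library-agnostic sketch of the technique (script URL and element id hypothetical):

    // Inject a script tag the first time a feature is actually needed.
    function loadScript(src, done) {
      var s = document.createElement('script');
      s.src = src;
      s.onload = done;
      document.head.appendChild(s);
    }

    var editorLoaded = false;
    document.getElementById('compose').addEventListener('click', function () {
      if (editorLoaded) return;
      editorLoaded = true;
      loadScript('/assets/editor.js', function () {
        // The heavy editor code is now available; initialize it here.
      });
    });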


There is this interesting pattern of having template compilation be possible from both the server and the client, such as with Airbnb's Rendr. It gives you the flexibility to choose where you want that template compilation to occur.

With regards to Twitter's Time To First Tweet being slow under their Single Page App, I recall Alex Maccaw mentioning that their JavaScript library execution was inefficient, in that the client had to download a whole bunch of JavaScript before it would load the part of the app responsible for rendering the data.

I believe he was suggesting that if Twitter optimised the delivery so that parts of Twitter's JS were served after the rendering part of Twitter's JS code, then the time to first tweet could be faster.

In my opinion, delegating template compilation to the client offers a nice separation of concerns; the server is the API, and the client is the UI.

It would be interesting to see benchmarks comparing these approaches, to find where server-side template compilation is beneficial over client-side template compilation.


Exactly, RequireJS if you need to load parts of the client dynamically but use plain old HTML for initial page delivery if you have a lot of results.

It is also what is recommended for Backbone, for example:

http://backbonejs.org/#FAQ-bootstrap


RJS was the most awful experience of old Rails. This comment has no other utility, it was just really ugly and depressed me and I've never gotten to say that before.


Twas but just a blip in the otherwise excellent timeline of ever increasing joy and happiness.


Ever increasing happiness brought by an Omakase meal.


"The template is just JavaScript instead of straight HTML" I guess that's one way to make this work. But for everyone that is serious about writing a web app with a slick UI should build things around api calls and move the views to the frontend (backbone.js & handlebars templates are great).


Yeah, did you finish reading? He covers that. The cost of doing so might make sense for your projects, but so many people jump all the way to doing that when their "app" is still mostly document-based and it's the wrong way to go.


Yep, I finished reading, but I disagree with this approach. Even in a mostly document-based project I would recommend using frontend templates. Fire-and-forget actions are just really easy to do, and the whole thing makes more sense IMO.


But if you're building document-based CRUD app number one-billion-and-five, where rendering templates is a significant portion of your app's execution time, doing that in the frontend means you can't cache the output, so every render for every user forever has to do all of the work. Sure, it scales well in terms of user volume, because you will always have roughly as many resources to process templates as you have users, but it doesn't scale as far in terms of data size and template complexity, because when it gets slow there's nothing you can really do.

Doing it serverside with russian-doll caching means that each template only gets rendered once for a given set of inputs and (at least for most CRUD apps I've ever built) you have huge cache hit rates, so it'd mostly be people pulling static HTML out of memcache all day. Crazy fast.
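
For reference, a sketch of how those nested (Russian doll) fragments look in an ERB view; it assumes the child records touch their parent on update so the outer cache key expires:

    <%# app/views/projects/show.html.erb %>
    <% cache @project do %>
      <h1><%= @project.name %></h1>
      <% @project.messages.each do |message| %>
        <% cache message do %>
          <%= render message %>
        <% end %>
      <% end %>
    <% end %>

Editing one message re-renders only its inner fragment; the outer fragment is reassembled from the cached inner pieces.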


This seems like a poor defence.

Why couldn't you just memcache the JSON response for a given set of inputs?

Template rendering is never, ever the slow bit in a CRUD app. Ever. At its core it's simple string concatenation. Getting the data, the SQL, is almost always the slow bit unless you're doing some serious computation or file jiggery, which is still not template rendering.


My Rails CRUD experience is the opposite. Template rendering always dominates database time. Without caching, rendering templates with nested partials is generally 5-10x slower than data access. Typical times are 20-50ms for ActiveRecord, 100-300ms for views. This is in production environments with production databases, Rails 2-4, even with alternative template engines that optimize for speed.


Your experience is colored by the language / framework of your choice. For example, look at the fortunes benchmark in the techempower benchmarks. Rails makes a good showing for a dynamic language framework, but it's clear that an order of magnitude better performance is possible on the same server hardware with java or c#: http://www.techempower.com/benchmarks/#section=data-r7&hw=i7...

So, someone using java or c# could very well say view render times don't matter, because for them that is true and their bottleneck is indeed the database.


Err, as a comparison: if my C#/ASP.NET MVC responses take more than 50ms I start looking for reasons why they're going so slowly. An uncomplicated page will take 20ms or so.

And that's on a small VPS.

But even going to Basecamp, the entire time waiting in chrome (which is basically the entire server-side run time + a few ms) is between 150-200ms and that's on what I assume is a fairly busy server.

So I suspect you're doing something wrong.


> going to Basecamp, the entire time waiting in chrome ... is between 150-200ms

The numbers in my comment are "without caching". Comparing them to Basecamp, which caches views and fragments heavily, is apples to oranges. Once I add caching, my typical response times for a cache hit are in the 10s of milliseconds.

You made an Amdahl's law argument that view caching is fruitless because rendering is an insignificant part of total response time. So I responded with uncached performance to show why Rails needs view caching.

It's no accident that Rails has comprehensive caching support; that the Rails team has worked hard to refine and optimize caching in each release; and that DHH writes about it so often (including in this article). You can't have performant Rails without view caching because rendering is dog slow.


I seem to remember the view time including ActiveRecord method calls, assuming that you use ActiveRecord objects in your views.


I suppose you could move your entire controller server-side and share a cache for the complete composed JSON documents, such that you do a single HTML page request followed by many cached JSON requests afterward, one for each "page". But if a page always gets rendered to HTML anyway, you'd only be skipping the last step.


37Signals are optimizing on ms here, re their previous articles on caching in Basecamp Next. So I think you are both right. Yes, it is nice & semantically correct to have client-side templates, but if you have an app under heavy load, you can save some client-side processing time by rendering a piece of HTML on the server. Would not want to maintain their code though.


Is this secure by default in Rails yet? I find it surprising that these techniques are promoted at the same time vulnerabilities are being publicly disclosed:

https://groups.google.com/d/msg/rubyonrails-core/rwzM8MKJbKU...


I believe the fix for this (checking if the request is xhr) hasn't been committed yet.


Is that completely adequate? There was an earlier round of changes due to attackers being able to forge the X-Requested-With header on requests. (This was the patch set at which Rails started checking CSRF tokens on .xhr? requests; before that, they got a free pass.)

See http://weblog.rubyonrails.org/2011/2/8/csrf-protection-bypas...


Is there a way to check that which can't be faked by altering the browser or a js framework though?

I was under the impression that trying to validate that was ultimately as fragile as checking the user-agent string...


It relies on a header, which can't be set through the attack vector, so it's all kosher.


Since we are on the same page, could you help me in this discussion with nzkoz? https://github.com/rails/rails/issues/11509 we're talking about different things


To me, this seems really convoluted - maybe as a non-rails developer I'm missing some key information?

Why, for example, would you use this over client-side templating and data-binding? Create the template once, grab some data with AJAX, bind it to the template...


I'm a rails developer, and it seems really convoluted to me too. But it seems to work for them and to give them some good performance characteristics. To each their own and all that. I do think that there's a good deal of "splitting" going on in the rails community - I don't feel like I am using the same rails as DHH is talking about in this article - but I'm not sure if it's such a bad thing. It's tricky to pull off two (or more) "stacks" without a lot of configuration, but at least from my point of view, configuration hasn't become a pain point.


Then your initial page hit requires two round trips: one to download the page and the JavaScript, another to get the data via AJAX, plus a JavaScript-based rendering step.

This way, the rendering is done server side so the initial html comes with the first request, and ajax updates are still generated using the same server side code path, just sent back inside of some javascript.


It's not much work to include the "first page" of data as a JavaScript variable declaration embedded in the response to the initial page hit to avoid that second round trip.
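
For illustration, a sketch of that bootstrapping in an ERB layout (variable name hypothetical; the escaping matters, since a literal </script> in the data could otherwise break out of the tag):

    <script>
      // ERB::Util.json_escape keeps HTML-sensitive characters out of the JSON.
      var bootstrapMessages = <%= ERB::Util.json_escape(@messages.to_json).html_safe %>;
    </script>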


The Backbone docs do recommend "bootstrapping" your data to eliminate the round trip for the initial JSON. I personally don't usually do that myself, since the call to the server for JSON doesn't have a significant visual effect (to my eyes anyway), whereas two different ways of loading data creates more testing and maintenance (on the server as well as the client).

Something about sending JavaScript from the server seems like mixing up the application layers and creating dependencies that shouldn't be there. I can see it perhaps if the code that you "own" is 100% on the server. Maybe I'm old school and it's just something that will have to sink in a bit before it makes sense to me.


DHH, the author of this post and Rails itself, is and has been actively campaigning against client-side applications, as they clearly undermine his server-side ecosystem.


He's set for life and rails is his pet project. I really don't think he cares if people move to alternative frameworks.

I'm happy they continue to support methods like this though, because I REALLY enjoy being able to put up a site that works perfectly without JavaScript, gets crawled by everything, and has a fantastic user experience for people with JS, without having to worry about duplicating "MVC logic" on both sides, messing around with rendering something in 2 different template languages, or adding a lot of extra boilerplate.

Everything just works and is lightning fast.


I'm the author of this comment: https://github.com/rails/rails/issues/12374#issuecomment-294...

I still don't get the benefit of using RJS (or SJR, or JSR, whatever) instead of render :json: with the former you have JavaScript spread across your views; with the latter you can organize it inside the assets, which IMHO is a way better solution.


It's not really mixed in with your views. You would put it in a .js file that has nothing but JavaScript in it.

Some benefits I see are:

1. It just works perfectly with and without JavaScript, which means your stuff gets crawled by search engines and works great for people without JS. Then if you have JS enabled you get the enhanced UI.

In #1's case I do this all the time for search boxes. In admin dashboards that I make with rails I allow people to search for data.

It's trivial with rjs to implement a solution that works with no javascript and then if you have JS enabled you get your search results shown immediately without a full page reload.

2. You can tell the rjs code to use server side (erb) partials that you already have created so you don't need to have templates made for both.

Or in #2's case, if you just use jQuery without any template engine, it gets super messy trying to append elements to another element with inline HTML strings. All of that is eliminated by using your existing erb partials.
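
For reference, the basic SJR shape from the article looks roughly like this (a sketch; model and route names hypothetical):

    # app/controllers/messages_controller.rb
    def create
      @message = Message.create!(message_params)
      respond_to do |format|
        format.html { redirect_to messages_path }  # no-JS fallback: full reload
        format.js                                  # renders create.js.erb below
      end
    end

and the SJR view it renders, reusing the same _message partial as the no-JS path:

    // app/views/messages/create.js.erb -- evaluated by the client on arrival
    $('#messages').prepend('<%=j render @message %>');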


I mean that you would have JS in your app/views folder together with the HTML views, instead of the assets folders only.

1. It doesn't depend on RJS vs. render :json, but on the fact that your controller responds to sync requests in addition to the async ones.

2. You can tell render :json to use server-side (erb) partials:

  render json: { html: render_to_string(partial: 'product') }
I do it all the time
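
Client-side, consuming that response is then just (URL and selector hypothetical):

    $.getJSON('/products/1').done(function (data) {
      // Insert the server-rendered partial the controller packed into `html`.
      $('#product').html(data.html);
    });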


I didn't know about #2, that's really cool. How does render_to_string handle cached partials?

Does it still need to compute the output of the partial's string representation on every request?


I've done this in PHP as a hack but I didn't realize it was supposed to be A Thing, with an acronym and everything.


haha, yeah that was my thought exactly :)


There is an interesting web framework that takes this approach as well -- N2O.

http://synrc.com/framework/web/

It can do server side rendering but then pushes it via a websocket (well with fallback) to the client.

The use case I understand is mobile clients.


The thing about SJR is that it lends itself much better to progressive enhancement: if you need/want to present a page which also works without JS, that means that you need to be able to render the page on the server, which means that by using SJR, you can easily reuse that same view code that you already need to have for the non-JS clients.

Of course, as JS on the server becomes more and more widespread, you might as well just use the same templates at both ends, but it's still more infrastructure than treating the client JS as just part of the view.

I discussed this back in 2011 on my blog: http://pilif.github.io/2011/04/ajax-architecture-frameworks-...


> The combination of Russian Doll caching, Turbolinks, and SJR is an incredibly powerful cocktail for making fast, modern, and beautifully coded web applications.

I'm personally not against SJR and I haven't really gone whole-hog into any JS framework (Angular/Ember/Backbone) ... but almost every RoR developer I've talked to is disabling Turbolinks on all their Rails 4 apps. I personally find it slow to load/render in dev mode, which leads me to believe it would be confusing for users in production.

Is anyone using/liking Turbolinks for Rails 4 apps?


I use it in a small app (Rails 3 though) and while it may be faster, the issue is that it does not follow the typical UX of a website.

Usually:

- Click link
- White screen / loading (computer is doing something)
- Next page progressively shows up

With Turbolinks:

- Click link
- Nothing is happening...?
- Next page appears all of a sudden


In my experience, no. I've written three RoR 4 apps since turbolinks was released and I've disabled it in all of them. None of my 6 ruby coworkers use Turbolinks either.

I actually liked Turbolinks, it makes the page feel much quicker, but there were just too many quirks to work around and I didn't really have time to dive into them.


That's interesting - I'm using Turbolinks extensively, to the point where I backported it to Rails 2.3, and to other frameworks.

It does a great job of eliminating the majority of page load time for apps with large amounts of CSS and JS. It definitely requires stricter control of JavaScript, but what it enforces is an existing good practice (idempotent scripts), so it's hard to be annoyed.

I've also added a loading indicator in some places where I'm using it on a more "app-like" site.
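
For illustration, a sketch of such an indicator hooked into classic Turbolinks' page events (spinner element hypothetical):

    // page:fetch fires when Turbolinks starts fetching the next page,
    // page:change after the new body has been swapped in.
    $(document).on('page:fetch',  function () { $('#spinner').show(); });
    $(document).on('page:change', function () { $('#spinner').hide(); });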


Great stuff: we do ad-hoc versions of this in a few of our applications. It deserves a formalization.

In my brief thinking about this, one potential version would use HTML5 data attributes like AngularJS does, but use HTTP/RESTful endpoints as "the model". I can imagine a few different approaches, but something like:

<div data-dyna-src="http://myserver/my-div-endpoint" data-dyna-method="poll 500ms"> ... </div>

Which would then poll the given endpoint and swap in new content in a pluggable-but-visually-pleasing manner. http://myserver/my-div-endpoint would serve up the partial of the div, so everything would be DRY. Basically move the model back to the server, but still buff up the presentation layer a bit.

You'd probably need a few different patterns: updatable divs, forms, updatable tables, progress bars, as well as some good default transitions and, of course, make the whole thing pluggable, and potentially provide for both HTML and script interchange at the endpoint (that's what our ad-hoc version does: it provides a data channel, an HTML channel and a raw script channel, but it feels like a hack).
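
As a sketch, the polling pattern above could be wired up like this (the data-dyna-* attributes are the hypothetical ones proposed here):

    // Find each annotated element, parse its polling interval, and
    // periodically swap in the partial served by its endpoint.
    $('[data-dyna-src]').each(function () {
      var $el = $(this);
      var spec = ($el.data('dyna-method') || 'poll 500ms').split(' ');
      if (spec[0] !== 'poll') return;
      setInterval(function () {
        $.get($el.data('dyna-src'), function (html) { $el.html(html); });
      }, parseInt(spec[1], 10));
    });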

Anyway, if anything is going to save us from the oncoming javascript-filled dystopian hellscape, it's probably something like this.


I thought that was called JSONP?

Anyway, this is clever from a traditional web development perspective but not from a contemporary one. I like to use AngularJS (with prerender.io when necessary for server-side rendering). I write all the non-HTML code for both the front and back end in ToffeeScript, which is derived from CoffeeScript.


This is done with an XHR request, not JSONP. JSONP requests are made by inserting a script tag and defining a callback function, to get around the cross-site request limitations of XHR.
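
For illustration, a minimal JSONP sketch (URL and callback name hypothetical):

    // The global callback the cross-origin server will invoke.
    window.handleData = function (data) { console.log(data); };

    // The "request" is just a script tag; the response is executable JS
    // shaped like: handleData({...});
    var s = document.createElement('script');
    s.src = 'https://api.example.com/data?callback=handleData';
    document.head.appendChild(s);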


... which now should generally be done with CORS, if you can.


As much as I love CORS, it unfortunately has a ton of browser compatibility quirks... and not just in IE.


Why didn't he use JSONP since he was executing the code right away?


Rails is a great framework. But posts like this and the one discussing how the team made product decisions around their caching strategy [0] and the horrible hacks that they did to make it work, for me are telltale signs that Rails is not the right tool for building web apps anymore.

Web development is evolving, as it ever has, and I believe that the next step is pushing the view layer to the client, and caching data, not html, on every step of the way. The major hurdle here is SEO, but I hope that when the right framework comes, it will offer an elegant solution to this problem.

[0] http://37signals.com/svn/posts/3112-how-basecamp-next-got-to...


I wouldn't say SEO is a major hurdle, there is prerender.io (http://prerender.io/getting-started#ruby) and countless other services which help (I made one).


What criteria are used to decide whether something is a hack or a feature? If there is a pattern that works for some people and provides a secure and clean way to achieve a goal, how can you say that it's a hack? Where do you draw the line between a 'hack' and a 'beautiful implementation'? I would like to know.


Such things were popular more than 15 years ago, when websites had to conserve bandwidth (for many users with slow links) and JavaScript wasn't considered as dangerous as it is now (so people didn't use NoScript much and didn't frown upon JS-only websites like I do).


In general, is this a good practice? You do lose the ability to reuse endpoints on mobile.


We reuse all controllers and models for our mobile views and for our API. We just do that reuse at the server instead of through the client. So no loss there.


Why would you lose anything? If you want JSON on mobile you just need to add .json to any endpoint and JSON will be returned (if you support it).

domain.com/posts.json

domain.com/posts/1.json

etc.


Isn't it impossible to optimistically update the UI with this model? You need to wait for a server roundtrip before you can display the results. There are certainly cases where that's OK, but it can make an app feel a lot less snappy.


Optimistic updating needs a serious amount of code and robust error handling on the client and is therefore non-trivial. This approach seems aimed to reduce the amount of work while being relatively fast.


Unless you abstract away the non-trivial part with something like Firebase :-D


Yep, that's a nice and promising technique. I'm currently working on a Node.js project and I'm using EJS and Handlebars templates on the server side and on the front end. The benefit is impressive. I can deliver the same content on page load, and using fancy Backbone routing I manage to change the page dynamically without duplicating code logic. I'm reusing some components on the server side and client side, so my templates will always render with the same context (JSON) but in different JS environments. Amazing how far we can go, just to satisfy Google and the SEO guys.


I suppose this is okay, but it can quickly become a mess as soon as you need to make the modified portion of the dom interactive:

    $('#messages').prepend('<%=j render @message %>')
      .find('.delete').click(fn)
      .end().find('.reorder').hover(fn,fn)
I suppose they work around this problem with event delegation.
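
With delegation, the handlers are bound once on the container and apply to rows prepended later, so no per-insert wiring is needed (a sketch, reusing the fn placeholders from above):

    $('#messages').on('click', '.delete', fn);
    $('#messages').on('mouseenter', '.reorder', fn)
                  .on('mouseleave', '.reorder', fn);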

If you use something like angular, the whole mess becomes:

    $scope.messages.unshift(message);


Actually, there is a gem that solves that problem if you want to use SJR:

https://github.com/xpdr/transponder

http://kontax.herokuapp.com/ - example app running on heroku

http://github.com/xpdr/kontax - example app on github


The web development world has been trending toward a Thick Client. This article is a call back to the Thin Client and centralizing all logic in the app server.

Most devs are still used to the Thin Client approach, so there is a productivity benefit to staying where you are familiar.

However, you can have just as many or more productivity enhancing libraries and practices in client side javascript as in the server side rails-like frameworks.


That made me wonder: is there a general consensus that Google will never, ever be able to crawl single-page applications? Because after having played with a client-side framework, I really see no other reason to go back to server-side rendering.

Looks like all this is really just a way to compensate for Google's lack of advanced technology on the crawling side (I just loved that last sentence :))


I've gone to a few AngularJS meetups at Google and they seem to think it's not a big problem. The crawler executes JS just fine, it's just a browser.


I enjoy dhh's articles, but find them a bit hard to understand, not being a Rubyist. dhh says:

> 2. Server creates or updates a model object. 3. Server generates a JavaScript response that includes the updated HTML template for the model.

Why does a template need to be updated? Isn't that part of the idea of templates, that they are fixed for changes in the model?


I don't understand what the advantage of using SJR is.


I too did not read the article.


> unless you make you’re doing a single-page JavaScript app

Yeah, cuz like no-one is doing that anymore. /s



