Turbolinks for Rails (like pjax) (github.com/rails)
82 points by timf on Sept 26, 2012 | 52 comments



I understand the motivation for this kind of stuff, and it's neat, but I'm wary of it because of the additional complexity it introduces for a relatively small benefit.

I may be fooling myself, but it's rare (on a desktop browser, at least) that it's the page rendering time that I really notice: far more significant is usually the latency, or the time taken to transfer the better part of a megabyte of HTML that's smothering a few kilobytes of actual text.

On the downside, it replaces something that just works with something that ... mostly just works. See elsewhere on this page: "Loads blank white screens in firefox 15" / "This is now fixed". And that's the problem: you've replaced something that works in every browser with something that you have to (or someone has to) test in every browser, and whose failure modes must all be handled. What happens when you click on a "turbolink" on an overloaded server, for example? My experience so far has been that this kind of enhanced link is usually faster, but the probability of nothing happening in response to a click is not insignificant.

I'm aware that I probably sound like an old grouch.


You're right that if your server is very slow at generating the response, turbolinks is not going to add much in terms of perceived performance. Same is true if you download multi-megabyte pages. So don't use it for either of those cases!

For Basecamp, we use it on pages that have low latency (50-100ms) and light weight (<100KB). It makes a big difference for apps built like that. This is a project that works with, and encourages you to build, apps like that.


No web service is immune to fluctuating network conditions. How can I tell if my browser is loading a turbolink? What happens when the request fails?


DHH said on Twitter "this will ship as default-on in Rails 4.0 Gemfile" - https://twitter.com/dhh/status/251024691337244672


Really looking forward to Rails 4.0. After the big structural changes, it's nice to see these additions alongside the famous Russian doll caching. It's also clear what Rails tells the Rails-is-only-an-API-layer-for-single-page-apps-haters:

Yes, you can write single-page apps, but with Rails you get the speed of single-page apps with the ease of traditional Rails.


I've been playing with pjax lately, and it works really well - there's a very subtle improvement with rendering pages, and it does feel faster. Looking forward to trying out Turbolinks.


So glad this will be default in Rails. I spent ages building a custom version of the same kind of PJAX framework, using headers to transmit flash messages, interpreting XHR calls in new and exciting ways, etc. It really needs to be standardised for modern web apps.
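(For the curious, the header trick might look something like this; X-Flash-Notice and the #flash element are hypothetical names:)

    $(document).ajaxComplete(function(event, xhr) {
      // Pull a flash message out of a custom response header set by the
      // server (hypothetical header and element names).
      var flash = xhr.getResponseHeader('X-Flash-Notice');
      if (flash) { $('#flash').text(flash).show(); }
    });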

If Rails 4 is all about PJAX, my hope for Rails 5 is it's the one that embraces fat-client JS apps and exposes JSON services in a standard way, so they can easily be consumed by convention-based frameworks such as EmberJS.


> my hope for Rails 5 is it's the one that embraces fat-client JS apps

Pretty sure DHH would say not a chance.


Rails is great for fat-client JS apps as it stands today. Everything on the controller and model side of things makes for a wonderful backend for fat-client JS apps.

But if you're going to use Rails for the view, the focus will remain on making Basecamp-style apps (turbolinks, Russian doll caching, etc).


I don't know. DHH has shown tremendous capability to say "out with the old" when he comes around to a viewpoint. And the old model of UI logic on the server is already in a slow decline at this point, imo.


He is not into client side javascript yet. https://twitter.com/agd108/status/211115804882763776

I was asking him about http://joosy.ws. I was excited about their approach of tying the client-side JavaScript UI into Rails: not just using another generic JavaScript UI framework with REST models, but hooking into the models of Rails and making an intelligent linkup to the JavaScript front end. I think the approach could be the future.


I like the idea, although I don't like that someone has to write this for a framework. I mean, ideally this would be built into the browser in some way. Recompiling cached assets seems like a waste of resources. Even re-executing JavaScript on page load could be controlled via some link parameters or headers.


In terms of pushState, I think it is, and the framework only truly exists to abstract the differing implementations across browsers. But naturally, when a page is refreshed, the code has to be re-executed, because you're resetting state and performing all sorts of setup and initialisation.

That being said, and I'm only putting this out there: Lisp-esque continuations in the browser for this sort of stuff?

Also, part of me thinks that if you require this sort of thing, Rails is the wrong tool. Rails is great, but a full stack framework is pretty intensive for this sort of thing.


I think the thing about turbolinks is you get a lot of the advantages of single-page apps but you can write it like a normal Rails app.


This is a little like jQuery Mobile's ajax loading strategy? Though I like that it's just links: ajax form submits in jQM seem to always need data-ajax=false, and ajax forms need to be very well thought out.


It instantly reminded me of jQM's ajax loading too. I've had good experience with it, but the framework (jQM) feels quite heavy and definitely overkill for something like this. If turbolinks provides just this functionality at a much lighter weight (and hopefully with browser support similar to jQM's), it could be quite a promising approach.


I would really like to see an option in Turbolinks to be opt-in instead of opt-out. Instead of applying the pjax-type behavior to all links unless they have data-no-turbolink, only apply it to links that have data-turbolink (or, ideally, a completely unobtrusive option to do $(node).turbolink() or $(node).data("turboLink", true) or something, although this is made more complicated by turbolinks' intentional lack of a jQuery dependency).

Probably not too hard to add. Maybe I'll get around to making a patch myself if nobody else does.

But I'd be a lot happier with turbolink on a link-by-link opt-in basis.
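In the meantime, a minimal sketch of one way to fake opt-in with jQuery, given that turbolinks already honors data-no-turbolink (the data-turbolink opt-in marker is hypothetical):

    $(function() {
      // Opt out every link that hasn't explicitly opted in via data-turbolink.
      $('a').not('[data-turbolink]').attr('data-no-turbolink', true);
    });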


Tried this out and noticed that it maintains scroll position between pageloads (scroll halfway down a page, click a link, new page loads halfway scrolled down). Figured it was a bug, but then saw this item in the readme's To Do section: "Remember scroll position when using back button".

Is it a widely accepted convention to maintain scroll position between PJAX page loads? When a large portion of the page content is changing, one might expect the page to scroll back to the top.
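In the meantime, a minimal workaround sketch, assuming jQuery and the page:change event mentioned elsewhere in this thread:

    // Scroll back to the top whenever turbolinks swaps in a new body.
    $(document).bind('page:change', function() {
      window.scrollTo(0, 0);
    });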


Unless I'm missing something, it seems that the goal would be to maintain the illusion that different pages have loaded, and therefore the back button would bring you back to where you were on the previous page (as it would for non-pjax/non-Turbolinks pages).


Ah, so I misunderstood that todo, but the issue still remains that scroll position is remembered when going to a new page.


This is interesting, but what happens to the javascript execution environment? Does it get reset, or does it inherit all the objects that were instantiated in the old page?


It's part of [pjax](https://github.com/defunkt/jquery-pjax). pjax works by returning all content stuffed in a single node, which then updates the current page with those contents. It fires Google Analytics updates, etc. It can also change the page title.

The matching nodes from the old page are replaced.
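For reference, the basic jquery-pjax wiring looks something like this ('#main' is a hypothetical container id):

    // Route link clicks through pjax, replacing only the #main container.
    $(document).pjax('a', '#main');

    // pjax:end fires after the new content has been swapped in.
    $(document).on('pjax:end', function() {
      // rebind any page-specific behavior here
    });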


How do you handle JS tied to document.ready?


Works great, but yeah this is the only thing preventing me from pushing it live right now (I already moved any page-specific JS to inside the body tags instead of in the head).

I imagine you have to rebind all your $(function() {}) code to the page:change event (and maybe also leave it bound to document.ready for the initial page load??), but I'm not quite sure the best way to do that.

Edit: ok, it's rough but it seems to work

    $(function() {
      alert("document.ready");
    });
    $(document).bind('page:change', function() {
      alert("page:change");
    });


I tried it out and binding to pjax:end works. So just wrap any code called by document.ready into a separate function. Then bind it to both document.ready and:

  $(window).bind('pjax:end', ->
    # document ready code
  )


Hmm, I wonder if there's any way to have an event on page:change that somehow _sends_ a document.ready trigger. So it will trigger all document.ready hooks, without any changes to the code hooking into document.ready (which of course could come from third-party dependencies you can't easily change to use page:change themselves, and which may not want to be changed to have turbolinks-related code in them).

Might be easier if your code exclusively uses jquery's ready rather than native DOM onload or whatever.
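Something along those lines is doable. A rough sketch, assuming jQuery and turbolinks' page:change event: capture every callback handed to $(fn) or $(document).ready(fn), then replay them after each page change. This has to load before any other script registers ready handlers:

    var readyCallbacks = [];
    var originalReady = $.fn.ready;

    // Record every ready callback as it's registered.
    $.fn.ready = function(fn) {
      readyCallbacks.push(fn);
      return originalReady.call(this, fn);
    };

    // Replay all of them whenever turbolinks swaps in a new page.
    $(document).bind('page:change', function() {
      $.each(readyCallbacks, function(i, fn) { fn.call(document, $); });
    });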

But yeah, either way, this is a bunch of added logic that can go wrong, I'm not super enthused about it myself.


Hmm, that doesn't seem to work with turbolinks. page:change works, but it fires on every change, not just on pages whose body defines the handler. I tried making a pageLoad() function on pages that need extra functionality and then putting the following in the head:

      $(function() {
        pageLoad();
      });
      $(window).bind('page:change', function() {
        pageLoad();
      });

However, it calls the originally set pageLoad() on every subsequent update. Maybe if there were some way to clear the function before it replaces the content?
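One way to clear it, sketched here assuming jQuery 1.7+: namespace the binding so each page's handler replaces the previous one instead of stacking up.

    // Drop any handler left over from the previous page before rebinding,
    // so only the current page's pageLoad() runs.
    $(window).off('page:change.pageload').on('page:change.pageload', function() {
      pageLoad();
    });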


From a page loaded with pjax? You wouldn't. You'd have to watch a different object.


Thanks, but I wasn't thinking about the DOM nodes, but JS global variables. I guess they wouldn't get reset, would they?


I dislike that this small gem is written in coffeescript.

https://github.com/rails/turbolinks/blob/master/lib/assets/j...

For various reasons we don't use coffeescript in our projects, but now if we want to use turbolinks (which sounds great) we'll get a coffeescript dependency.


Putting the ab initio strangeness of having JavaScript code in a Ruby gem aside -- what it happens to be written in shouldn't be an issue. If the gem is released with the compiled JavaScript inside of it, then it can just be used, without any additional dependency.


I checked, the gem only has the Coffee source, no JavaScript. This frustrates the hell out of me -- clearly the message that you should ship JS is not getting through. When CS can do source maps, ideally people will ship the source, compiled JS and map together.

Having either in a gem is rare, but sensible if the gem wants to include a client to interact with the service it provides.


Good point.


Guys, hang on. When Rails assets are compiled, all CoffeeScript is converted to JavaScript, which is then compressed and minified. So unless you are writing CoffeeScript yourself, you will never have to deal with it. Asset pre-compilation has been a Rails feature since Rails 3.1.


Rails already ships with default-on coffeescript.


Yes, like turbolinks it's a default dependency in the Gemfile.

But e.g. jquery-rails https://github.com/rails/jquery-ujs is written in JS, not CS, so I can use all the jQuery integration in Rails without using CS.

With turbolinks written in CS, if I want to use the full power of Rails+JS I need to add CS to my project.


Or you could just compile the CoffeeScript into JavaScript once and return to your normal JavaScript flow.


You can even write your JavaScript in JavaScript; it should still work.


How would this work with something like Angular JS? I'm guessing that JS frameworks that do DOM manipulation won't be very happy when all of their DOM references just vanish - I'm guessing there's also the possibility of leaking tons of memory here if the event handlers aren't unbound.
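You'd likely need an explicit teardown step. A hedged sketch, assuming some event fires before the body is replaced (page:fetch is an assumption here, so check which events your turbolinks version actually emits):

    $(document).bind('page:fetch', function() {
      // Destroy widgets and unbind delegated handlers scoped to the old body
      // so detached nodes can be garbage collected. '#calendar' (a jQuery UI
      // datepicker) and the '.myapp' event namespace are hypothetical examples.
      $('#calendar').datepicker('destroy');
      $('body').off('.myapp');
    });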


Seriously. I spent many hours building my own pjax-like framework with pushState, and what I've done is (quote) "worry about what element on the page to replace, and tailoring the server-side response to fit". A standardized way of doing this would have been so much better.


Agree with this.

As far as I can tell, there will be two broad use cases:

1) Sibling pages within a section will often share RHS and header content. So it makes sense to replace the inner area where the actual page content lives and leave the rest alone.

2) Cases where the above doesn't hold or it's too complicated to care about, so load in the whole body.

I'm sure DHH chose the #2 rule because #1 turns out to open a can of worms?


Looking forward to some blog analysis on all this.

And the inevitable vs. PJAX conversation (i.e.: is it flexible enough? Does it need to be?)



It really feels fast and good! I'm evaluating the impact on my projects with a simple bookmarklet loading a JS version of turbolinks (the JS is not always up to date) -> https://gist.github.com/3793194


Works fine on Chrome, and yes, it is very fast. Loads blank white screens in Firefox 15 on Ubuntu.


This is now fixed.


How does it work when the pages have different javascript & CSS links on them?


Pretty sure it doesn't.


I see it is on the TODO list:

> Work left to do

> CSS/JS asset change detection and reload


how about, say, RSS/Atom feed autodiscovery links in <head>?

how about <script>s at end of <body> for load performance instead of in <head>?

there's always edge cases. it's asking for trouble, sez me.


I am curious what default-on means: will all links use this by default?


Correct. You can opt-out links with data-no-turbolink. Or, of course, you can opt out of the whole thing by removing the turbolinks gem from your Gemfile.



