Hacker News

I understand the motivation for this kind of stuff, and it's neat, but I'm wary of it because of the additional complexity it introduces for a relatively small benefit.

I may be misleading myself, but it's rare (on a desktop browser, at least) that it's the page rendering time that I really notice: far more significant is usually the latency, or the time taken to transfer the significant proportion of a megabyte of HTML that's smothering a few kilobytes of text.

On the downside, it replaces something that just works with something that ... mostly just works. See elsewhere on this page: "Loads blank white screens in firefox 15" / "This is now fixed". And that's the problem: you've replaced something that works in every browser with something that you have to (or someone has to) test in every browser, and whose failure modes must all be handled. What happens when you click on a "turbolink" on an overloaded server, for example? My experience so far has been that this kind of enhanced link is usually faster, but the probability of nothing happening in response to a click is not insignificant.

I'm aware that I probably sound like an old grouch.

You're right that if your server is very slow at generating the response, turbolinks is not going to add much in terms of perceived performance. Same is true if you download multi-megabyte pages. So don't use it for either of those cases!

We use it for Basecamp, on pages that have low latency (50-100ms) and are lightweight (<100KB). It makes a big difference for apps built like that. This is a project that works with, and encourages you to build, apps like that.

No web service is immune to fluctuating network conditions. How can I tell if my browser is loading a turbolink? What happens when the request fails?
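One way to answer both questions is to hook the events Turbolinks fires around each fetch. The sketch below assumes the "classic" Turbolinks event names (`page:fetch` when a request starts, `page:load` when the new body is swapped in); your version may use different names, so check its documentation. A tiny event bus stands in for `document` so the sketch runs outside a browser; in a real page you would register the same listeners on `document`.

```javascript
// Minimal event bus standing in for `document` so this runs outside
// a browser; in a real page, replace `bus` with `document`.
const bus = {
  handlers: {},
  addEventListener(name, fn) {
    (this.handlers[name] = this.handlers[name] || []).push(fn);
  },
  dispatchEvent(name) {
    (this.handlers[name] || []).forEach(fn => fn());
  },
};

let loading = false;   // in a real page, this would toggle a spinner
let timeoutId = null;

bus.addEventListener('page:fetch', () => {
  loading = true;      // the request has started: show an indicator
  // Fallback: if nothing arrives in time, force a full page load so a
  // failed request never leaves the user staring at a dead link.
  timeoutId = setTimeout(() => {
    // window.location.reload();  // browser-only; omitted in this sketch
  }, 5000);
});

bus.addEventListener('page:load', () => {
  loading = false;     // the new page arrived: hide the indicator
  clearTimeout(timeoutId);
});
```

The timeout value and the reload-on-failure policy are illustrative choices, not anything Turbolinks itself prescribes.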
