Progressive enhancement is faster (jakearchibald.com)
58 points by bpierre on Sept 3, 2013 | 29 comments



Progressive enhancement is a luxury, and not everyone can afford it. It vastly increases your test surfaces, and requires multiple designs for every page/feature.

If I'm trying to get a product out and I can reach 99% of my audience by assuming they have JS enabled, then I'm going to do that. I'm not going to spend 2x as long (at least) to reach that extra 1%.


You're misrepresenting progressive enhancement as "for people with JS disabled". I guess you didn't get past the first paragraph of the article?

Progressive enhancement actually decreases your testing surfaces by moving more logic to the server which is under your control, whereas the clients are running a variety of different implementations.


> Progressive enhancement is a luxury, and not everyone can afford it. It vastly increases your test surfaces, and requires multiple designs for every page/feature.

Only if you're doing it wrong. (I.e. if you re-implement planned JS functionality in HTML, which sounds like what you're describing.)

Progressive enhancement done right starts with a fully functional prototype of the application done in HTML. Then you enhance your workflow and performance with JavaScript. Enhance, not re-implement.
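
A rough sketch of that "enhance, not re-implement" flow, assuming a baseline HTML search form that already works via a normal GET request (the element ids and endpoint here are made up for illustration):

    // Baseline markup, which works with JS disabled:
    //   <form id="search" action="/search" method="get"> … <div id="results">
    // Enhancement: intercept the submit and swap results in without a full reload.
    const form = document.querySelector<HTMLFormElement>('#search');
    const results = document.querySelector<HTMLElement>('#results');

    form?.addEventListener('submit', async (event) => {
      event.preventDefault();
      const query = new URLSearchParams(new FormData(form) as any).toString();
      const response = await fetch(`${form.action}?${query}`);
      if (!response.ok || !results) {
        form.submit();                              // fall back to the working baseline
        return;
      }
      results.innerHTML = await response.text();    // server returns the same fragment it already renders
    });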

One big advantage of PE is that it encourages semantic use of HTML, which allows for declarative and reusable JS libraries.

Every single person I've met who said it's "impossible" or "too expensive" never really tried. You don't sound like you've ever really tried it either.


Websites differ in their basic nature, and one size does not fit all. Roughly speaking, websites that are document-like (primarily consumption-oriented) should probably be progressively enhanced; websites that are tool-like (interaction-oriented, like Gmail, analytics apps, other SaaS) probably should not. Websites that fall in the middle will have to carefully weigh the user-experience benefits against the available technical/operational resources.

The upside is that progressive enhancement can refer to a spectrum of techniques. Deliver only above-the-fold content as HTML, inline your CSS/JS, inline the initial JSON data, omit <form> POST support, etc. Two templating systems do not need to be supported. It's a straightforward technical problem to apply JS templates server-side (and I'm speaking as a boring old .NET developer--Nustache and Edge.js come to mind).
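
To make the server-side half concrete, a minimal sketch using mustache.js in Node (rather than Nustache/Edge.js); the template and data shape are invented:

    import Mustache from 'mustache';  // same template syntax the client-side code uses

    // Hypothetical shared template, e.g. read from templates/article.mustache.
    const articleTemplate = '<article><h1>{{title}}</h1><p>{{summary}}</p></article>';

    // Server-side render: the first response is plain HTML, no JS needed to paint it.
    export function renderArticle(data: { title: string; summary: string }): string {
      return Mustache.render(articleTemplate, data);
    }

    // The client can later re-render the same template with fresh JSON,
    // so there is still only one templating system to maintain.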

Btw, another benefit of progressive enhancement can be SEO.


It seems like the author is confusing progressive enhancement with progressive rendering.

Either way, to say that the Dale piece "conclusively" shows progressive enhancement to be "a futile act" is, in the OP's word, "misrepresenting."

"Exceptional" or not, the key is to recognize when you're dealing with one of those cases -- sites which, like Wikipedia, could be great on Web 1.0 browsers. In those cases, your focus is likely to be more on the content and its structure. In the "progressive enhancement est mort" view, you'll have to spend more time on engineering.


Progressive rendering is enabled by progressive enhancement.

I'm not convinced progressive enhancement is more effort unless you make it more effort. I covered these arguments and more in my previous post http://jakearchibald.com/2013/progressive-enhancement-still-...


> Progressive rendering is enabled by progressive enhancement.

Sometimes, but only by coincidence. The baseline of supporting JS off is that you have to render the entire page contents server-side. If the progressive enhancement is the addition of content that is only supported by JS then you are correct. However, the ideal progressive rendering is to send a generic static HTML shell with no personalized content that can be delivered instantaneously from a web server without any back-end chatter, then load the dynamic content from the client side. Facebook is probably the most advanced implementation of this technique, and it yields a dramatic increase in perceived performance: the page starts visually appearing sooner, which psychologically extends the user's patience.
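
A loose sketch of that shell-then-content pattern (element id and endpoint are invented; Facebook's actual pipeline, BigPipe, is far more elaborate):

    // The server instantly returns a generic, cacheable shell containing
    //   <div id="feed">Loading…</div>
    // and the client then fills in the personalized part.
    async function loadFeed(): Promise<void> {
      const feed = document.getElementById('feed');
      if (!feed) return;
      const response = await fetch('/api/feed', { credentials: 'include' });
      const items: { html: string }[] = await response.json();
      feed.innerHTML = items.map((item) => item.html).join('');
    }

    loadFeed();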


That doesn't sound like a definition of progressive rendering I've heard before http://www.codinghorror.com/blog/2005/11/the-lost-art-of-pro...


Read and learn, young grasshopper: https://www.facebook.com/note.php?note_id=389414033919


Good article, but that's not the definition of progressive rendering


What's the precise definition? The browser rendering partial HTML as it is streamed down? That is just built-in browser technology to achieve the same goal. There's no meaningful distinction in terms of the end result.


Actually, I misread your earlier post.

Yes, what Facebook does is progressive rendering, but I disagree that it's the ideal since it's still blocked by JS. The ideal is serving HTML from the server which can get content on the screen before JS downloads.

"That is just built-in browser technology to achieve the same goal" - exactly. Tweetdeck does progressive rendering, but it avoids the simplest way of doing it and instead reinvents the technique with JavaScript. You can see what this does to performance.


The point is that the perceived performance of Facebook is impossible to achieve with server-side rendering.

Whether they should optimize for that 99.99% or the other 0.01% on philosophical grounds is a point you are free to debate. However the facts are the facts.


Okay, I got to admit: That was awesome. Well thought out.


The thing that always bugged me about rendering things in the client was...

1) supporting 2 templating systems (server & client)

2) no graceful degradation (or "progressive enhancement", depending on your opinion), i.e. being able to get a page's content with a simple wget

In any case, since it hasn't been mentioned in this discussion, I'd like to direct people's attention to PJAX (http://pjax.heroku.com/).

I've found this to be a nice, simple solution for having pages work identically with and without JavaScript. The initial page load is rendered by the server, and the HTML for subsequent sections of the page is also rendered on the server but loaded via Ajax and swapped in with one jQuery .html() call. The app URLs and the Ajax URLs are the same, but they return the page's full contents (<html>...</html>) when requested normally and the page's partial contents (<div id="content">...</div>) when requested asynchronously.

Check it out if you haven't.
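
Server-side it boils down to something like this (sketched with Express in TypeScript; the pjax library marks its Ajax requests with an X-PJAX header, and the renderers here are invented stand-ins):

    import express from 'express';

    const app = express();

    // Hypothetical renderers; both produce server-rendered HTML.
    const renderArticleBody = (id: string) => `<h1>Article ${id}</h1><p>…</p>`;
    const renderFullPage = (body: string) =>
      `<html><body><div id="content">${body}</div></body></html>`;

    app.get('/articles/:id', (req, res) => {
      const body = renderArticleBody(req.params.id);
      // Same URL: return just the fragment for pjax requests, the full document otherwise.
      res.send(req.get('X-PJAX') ? body : renderFullPage(body));
    });

    app.listen(3000);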


That technique is exceedingly inefficient (download-wise) though, at least if your markup is anything but trivial.


We used to live with "exceedingly inefficient" full page reloads when we had dial-up, single-core computers and slow servers. And it worked. Now we have multicore computers, multi-megabit DSL connections and cloud-based hosting, and yet you present it as if a difference of a couple of kilobytes (which can be reduced to nearly zero by proper design) makes a life-or-death difference in website performance.


> yet you present it as if a difference of a couple of kilobytes (which can be reduced to nearly zero by proper design) makes a life-or-death difference in website performance.

That's quite the stretch, given what I wrote. I only said it was inefficient.

There's also something to be said for the fact that rendering templates on the server will make any meaningful client-side caching almost impossible. And mobile is the new dial-up; while some are fortunate to have mobile broadband, it's certainly not ubiquitous, and multi-core phones certainly aren't the norm either unless you only want to consider the HN readership for your sample.


This is a pretty good way of doing things, although I'd give older versions of IE the server-refresh version and get rid of jQuery, or at least use a cut-down build of jQuery 2.

If you're gzipping your responses (and you should if they're over a couple of k) the overhead compared to JSON is minimal.


1. I don't want to dabble with templating on a server. It's annoying, and separates two layers that I don't want separated.

2. It needs to work offline, and to support that I would have to double up on work, maintaining two levels of templating.

3. No framework has made this easy; in fact, new frameworks seem hell-bent on making it even harder. See http://bone.io

4. Telling us to "do that" is not going to make it happen. It has to be easier, and clearly it is not: otherwise more developers would be doing it. I want someone to convince me, but I have yet to see a post going in depth on the technical implementation of such a solution (one that adheres to the 3 points mentioned above).

5. Document-oriented sites (blogs, wikis, maybe even forums) should never have been implemented with only JS in mind anyway.


About #3, DerbyJS [1] seems to precisely address this, while keeping concepts used by Angular / Ember / etc., like declarative events.

It seems like a huge step forward to me; I don’t understand why it’s not more popular on HN.

[1] http://derbyjs.com/#introduction


http://lanyrd.com/mobile/ works offline and without JS, and shares templates between the client and the server for updating pages async and for rendering offline. It doesn't use a framework, so it doesn't have a huge JS payload.


Frameworks don't necessarily mean you're delivering a huge JS payload, but more to the point: how was that template sharing achieved? Would love some notes on the technical implementation.


The templates are mustache, delivered via a single JSON file (https://m.lanyrd.com/templates.v356.js). They're used on:

* The server-rendered web (python)

* The enhanced & offline web (javascript)

* The iOS app, where native views aren't used

* The Android app, as above

The client code is pretty dumb: it knows how to turn a link into an API call, and the API response basically says "Render template x (or equivalent native view) with this data…"
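
A very rough sketch of what that client-side dispatch could look like (my guess at the shape, not Lanyrd's actual code; the response format, data-api attribute and template registry are invented):

    import Mustache from 'mustache';

    // Hypothetical preloaded template registry and API response shape.
    declare const templates: Record<string, string>;
    interface ApiResponse { template: string; data: unknown; }

    document.addEventListener('click', async (event) => {
      const target = event.target as Element | null;
      const link = target?.closest<HTMLAnchorElement>('a[data-api]');
      if (!link) return;
      event.preventDefault();                      // the plain link keeps working without JS
      const response = await fetch(link.dataset.api!);
      const { template, data } = (await response.json()) as ApiResponse;
      document.getElementById('content')!.innerHTML =
        Mustache.render(templates[template], data);
    });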


If your website genuinely has a good reason for requiring js then that's fine. If your buggy js gratuitously stops me from being able to access textual content, you're going on my list of people whose fingers need breaking with a polo mallet before they do any more damage. You'll be next after the blogspot.com guys.


Or, in other words, use the right approach for your use case.

The article is missing one optimization in the JS case: the initial XHR can be inlined as static JSON data in the original HTML page. Of course, if you have a static original HTML page then it can be cached on a CDN or in appcache, so really the perf story is not clear-cut.


If you can inline static JSON you're a small amount of effort away from using server-templating to serve HTML. Do that.


Hardly. Your backend needs to support both your templating language and whatever you're using to populate those templates. Which means either adding support for JavaScript to your backend or creating and maintaining redundant code.

Meanwhile your backend could be Python, Ruby, whatever and can easily inline the initial chunk of JSON without needing support for any of the frontend technologies doing the actual template rendering (such as Handlebars+Backbone for example).
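
For what it's worth, the inlining itself is tiny; a sketch, with an invented script-tag id and data shape:

    // Server-side, any backend (Python, Ruby, .NET, whatever) emits this straight into the page:
    //   <script id="initial-data" type="application/json">{"items": […]}</script>
    //
    // Client-side, the first render reads it instead of waiting on an XHR:
    function readInitialData<T>(): T | null {
      const el = document.getElementById('initial-data');
      return el?.textContent ? (JSON.parse(el.textContent) as T) : null;
    }

    const initial = readInitialData<{ items: unknown[] }>();
    // If the inline block is missing, fall back to the normal fetch.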

So far as still needing to wait for JavaScript to download it becomes a matter of how the ROI works out for one's particular use case. Let's say simply inlining the JSON for the initial content narrows that 'time to initial content' gap to a few hundred milliseconds. Is removing that gap still a worthy return on your investment?


Progressive enhancement is good practice, period.




