Hacker News
Progressive Enhancement Is Dead (tomdale.net)
334 points by avolcano on Sept 2, 2013 | 259 comments

"And most importantly: Don’t be ashamed to build 100% JavaScript applications. You may get some incensed priests vituperating you in their blogs. But there will be an army of users (like me) who will fall in love with using your app."

This statement needs a huge, HUGE caveat that you should only be building 100% JavaScript apps in situations where doing so makes sense. For example, I find the new Blogger "web app" infuriating. I shouldn't have to stare at a loading screen to read a half-dozen paragraphs of text, that's just stupid. Just serve the HTML. No one is going to "fall in love" with your app if your app shouldn't exist in the first place because the old-school solution provided a superior experience.

Exactly; those of us "vituperating priests" who rail against JavaScript abuse are mostly complaining about things like plain text (or even text plus photos) requiring JavaScript. You want to create some whiz bang new thing that needs Turing completeness? Go to town with JavaScript. You want to require enabling a possible malware vector and chew through computer resources like they're candy just to read a few paragraphs of text? Forget you, I'll keep browsing. And so will crawlers, so you'll never get linked. Attention is scarce, information is not - do the math.

I've been building 100% JS apps for 4 years now. With proper fragment loading, rendering optimization, and CDN usage, the apps appear to load just as fast as an ASP/PHP/JSP page (i.e. 200ms). Once the base page is loaded, subsequent page loads are instant (i.e. 10ms), smoking a non-JS web app.

Blogger is sadly a dog and has been for the longest time. Profiling it just now, some of the blocking JS dependencies take > 800ms to load. Frankly there are many things wrong with it, but it should not serve as an example of a well designed JS web app.

On slow connections I can sometimes only get the base html to load. No images, no external CSS or js. Extra requests for content are just asking for trouble. Worse, when I'm halfway around the world my ping is atrocious. It doesn't matter how quickly you generate your content for me, it's going to take twice as long for me to get it if you don't send it the first time I ask for it.

If I click on a link on your page and you have a faster way to get me there than a full page refresh then great, let's do that. On the first page load, though, there is no good excuse to make me request the site template and the content sequentially and smoosh them together myself. Do that on your end.

In that case, you'd likely prefer a JS based app that used its app manifest to cache the actual moving parts (JS/images) before you headed on vacation and landed on that slow connection. Now it only makes a bare minimum of compressed RPC's to provide the page with the content to render itself. Depending on the content being rendered, you could even get an RPC down to one or two packets, much better than sending you the entire site frame over and over for a minor change (I'm looking at you classic postback driven ASP app).
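For reference, the HTML5 application cache mentioned here is driven by a plain-text manifest file the page opts into via <html manifest="app.appcache">. A hypothetical one (file names made up) might look like:

```
CACHE MANIFEST
# v1 - hypothetical app shell assets cached for offline/slow links
app.js
app.css
logo.png

NETWORK:
# content RPCs always go over the network
/api/
```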

> In that case, you'd likely prefer a JS based app that used its app manifest

Speaking as somebody who's been in that case, no.

1. The app manifest doesn't need "JS-based apps"; they're independent

2. Assets caching doesn't need the app manifest either

3. Stop breaking stuff which works. HTML works, your shitty application does not.

I hope the 10ms quote is for a localhost connection; otherwise I'll read it the way I read the huge fish weights from my father's last fishing trip.

Since you can have a local cache of objects of interest to the user, many subsequent user interactions don't require an RPC. For example, you loaded this month in the calendar, but once loaded did an extra RPC to fetch month N-1 and N+1's content. Now when the user visits month N+1, you can just render the calendar with that content without any requests or apparent delay to the user. Further, if the user clicked on a calendar day, you could show the day details with no additional RPC.
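A rough sketch of that prefetching cache; fetchMonth and the Map-based cache shape are assumptions, not a real API:

```javascript
// Cache of month content already fetched for the user.
const monthCache = new Map();

function loadMonth(n, fetchMonth) {
  // Fetch the requested month on a cache miss.
  if (!monthCache.has(n)) {
    monthCache.set(n, fetchMonth(n));
  }
  // Prefetch the neighbouring months so the next navigation
  // needs no RPC at all.
  for (const neighbor of [n - 1, n + 1]) {
    if (!monthCache.has(neighbor)) {
      monthCache.set(neighbor, fetchMonth(neighbor));
    }
  }
  return monthCache.get(n);
}
```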

I quote 10ms because the client-side code to produce the new HTML and the browser painting events can take time, but if you used a canvas or similar you could cut that render time down even further.

No, the 10ms is because you don't have to hit the server on every page load.

But presumably you'd need to hit the server for new content. The RTT on that alone would take more than 10 ms.

Entirely an aside, and not meant as criticism but merely an interesting (hopefully) point noted in passing:

If you are doing something that requires Turing-completeness, you're doing something that can't be done on real computers, because any collection of digital hardware can be represented as a (flippin' ginormous) finite state machine.

Turing machines are not infinite. They are merely big enough for everything you want to do. (I.e. actual infinity vs potential infinity.)

Turing defined the machines with reference to an infinite tape. Obviously any given machine (that halts) isn't going to use all of it. If you give any specific hard bound, you have a system which can be implemented on a sufficiently large FSM. There is, therefore, a sense in which you can never "need" a Turing complete system to solve any given problem that you can solve with finite resources.

To hold to your definition of Turing completeness would make the term effectively pointless. If you relax the definition only so slightly, suddenly it has meaning again to say that something can be Turing complete.

No, to hold to my definition of Turing completeness would make the term an important theoretical construct, with a lot of ramifications. It would make the statement "X needs Turing completeness" meaningless in a particular, strict sense. I understood what the parent was talking about, speaking loosely, and I originally stated that my post was not meant as criticism.

No hard bound, but the tape is only potential infinite. (Compare https://en.wikipedia.org/wiki/Actual_infinity)

I don't see that that actually has any bearing on what I said, though. Reality will always give some hard bound, on any practical problem.

Not if you are prepared to buy more RAM (or disk space for swapping) as the program is running.

There's only so much RAM in the universe.

And, more importantly, there's only so much RAM one could reasonably acquire. When someone says something in the browser requires Turing completeness, they don't mean they expect users to acquire arbitrary amounts of RAM even at amounts far, far below any theoretical limits.

I feel like this point is sort of implied by the fact that he said just "Don't be ashamed to build 100% JS apps" and not "Build 100% JS apps for everything". But it's still good to spell this out explicitly.

It's more than implied by the OP; he did, in fact, spell it out explicitly: "Of course, there will always be cases where server-rendered HTML will be more appropriate."

I'm still opening a dozen tabs and reading them whenever I want; unfortunately, most newer JS-only web blogs, such as Inc mobile, end up having a lot of errors and white pages.

I understand your newfound JS excitement with your full Ember.js app, but don't forget the lessons from Java the browser plugin and Flash.

1. OK, you're real time, but don't assume I'm watching your site like a TV.

2. Don't consume my precious CPU power; otherwise app throttling will come to browser tabs.

3. Don't end up with lots of client-side errors due to various problems, including local storage. The Twitter mobile client is an abysmal example, where tweets are lost on screen, and Inc mobile is another, with empty white pages.

JavaScript architecture skills evolved much more slowly than traditional back-end web development and of course it's more difficult to monitor errors that happen on the client side, so yes, it's still a reality that there is a correlation between heavy javascript and broken pages.

However I don't think citing the problems with Flash and Java Applets is applicable. JavaScript maturity is explicitly the long-awaited remedy to the problems of those bolted-on runtimes. It's true that bloated over-the-top CPU-hogging monstrosities will be done with JS as with prior technologies, but there's no technology that is immune to that.

FWIW, Ember is pretty performance-oriented and lets you get a lot done with a very low code / performance / byte overhead.

Javascript is no cure-all. Just like flash and java, avoiding the subjugation of built-in browser features (back button, bookmarking, hyperlinks, context menus, copy and paste) requires explicit effort many developers simply don't give either out of laziness or ignorance.

Being able to CURL any page on your site and parse it using standardized, agreed upon tag names is a feature.

> Being able to CURL any page on your site and parse it using standardized, agreed upon tag names is a feature.

But the vast majority of my users don't even know what cURL is, so why should I worry about this? And in any case, I believe that if I wanted to extract content programmatically, consuming the same API that the Javascript uses to fetch content would be much easier than extracting it from HTML.

The vast majority of users don't know what a bookmark, a URL or a back button is. This isn't any kind of argument. We don't do the right thing because people know about it. We do it because it doesn't secretly break so many of the conveniences we're used to. We. Us. Power users. Hackers.

Tell me, why should you be lazy just because you think the vast majority of your audience are fools and won't notice?

-sigh- Why would you take that away from my comment? The point is that JavaScript makes it possible to build a performant and smoothly integrated web app, where with Java and Flash, by their plugin nature, it is impossible.

I think some browsers already slow down the render loop in non-visible tabs, but the much nicer solution (if people use it) is the Page Visibility API.
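A sketch of what using that API could look like, with the render loop abstracted into a made-up stand-in object so the throttling logic is plain:

```javascript
// Pause or resume background work depending on tab visibility.
function onVisibilityChange(hidden, loop) {
  if (hidden) {
    loop.pause();   // stop burning CPU while the tab is hidden
  } else {
    loop.resume();  // pick the work back up once visible again
  }
}

// In a browser this would be wired up roughly like:
// document.addEventListener('visibilitychange', () =>
//   onVisibilityChange(document.hidden, renderLoop));
```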

I think that issue, while valid, is entirely different from what OP is talking about. That is the question of the client- vs. server-side rendering tradeoff, and it looks something like this in my view:

- If you have a website that people will come to then spend a bit of time in different views (like gmail, facebook, or an analytics dashboard), it will usually pay off to dock the initial load time a bit in exchange for much faster loads of subsequent views. This is the base tradeoff made by switching to client-side rendering.

- If you have a website where most people will come check out one page then bounce (like a news site, a blog, a page with directions or info, anything where an article will be linked to rather than something that's used like an app), you won't want to make that same trade: the extra blow to initial load time is going to hit most people, and they'll never see the benefit of faster subsequent views slowly adding up, because they will read the info and then leave.

Any page can be built as a single page app, and many people will do this automatically since it's the new hotness. But the question we should be asking ourselves is whether what you are building actually will be used as an app or not. I'll close out with an interesting example:

If you are building a news site, you should not build as a single page app because it's likely that you'll have a lot of single page views and a high bounce rate. Those single pages should be rendered as quickly as humanly possible. However, if you are building something like a feed reader, the situation would be the opposite. People will likely spend a bit of time in your interface reading a variety of articles, so the additional load time at the beginning will quickly pay itself off in faster renders for each item they read.

EDIT: An interesting approach for a news site would be to render single article views straight from the server or have them compiled, but render their homepage as a single page app. Tailor different parts of your website to the way that they are viewed. Has anyone written about this? Maybe I should write about it...

Template rendering is rarely a bottleneck. When JS-only pages seem slow, it is often because they are loading large JS libraries or are waiting for after page load to begin fetching AJAX data. By prerendering the initial JSON data for the page in script tags, and keeping javascript to the minimum necessary to render the page, pure javascript can render very quickly.
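A hypothetical sketch of that prerendering trick. The server would emit something like an inline data script, and the client parses it instead of making a post-load AJAX request (the element id and data shape are made up):

```javascript
// The server emits:
//
//   <script id="bootstrap" type="application/json">
//     {"posts":[{"title":"Hello"}]}
//   </script>
//
// and the client reads it back without any extra round-trip.
function readBootstrapData(jsonText) {
  // In a browser, jsonText would come from
  // document.getElementById('bootstrap').textContent.
  return JSON.parse(jsonText);
}
```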

It's no secret that you need to minify your scripts, pass them along in a single request, and put them at the bottom where they won't block the UI from rendering. You should also link to CDNs to avoid that costly lookup, or the wait for the HTTP request hit (which is just so much larger when it's coming from the server; hopefully it's light JSON and not a beast riddled with server-side templates).

Bustle is an example used by Tom Dale, and their JS app is what? Not even 150kb? Your headers alone are probably slowing you down more than this.

Locking the UI is bad, but it's not all JS apps that lock up the page as the browser puts it all together. You just need to know how the browser reads your styles and scripts and put them in the right place.

I think your edit is spot on. Cache a single page render in the event that someone is hitting it from search or a direct link and if they hit the home page, give them the dynamic single page app treatment. Sounds like a happy medium with a little overhead. (JS apps who want to be indexed would want the static pages for crawlers anyway.)

Not to mention stuff like accessibility that ends up being an afterthought for a lot of people.

Accessibility is a red herring. JavaScript apps work just fine with screen readers, especially if you support WAI-ARIA.


No, I totally agree that accessibility is possible if your app is JavaScript based. But if you're unaware of or you forget to add in accessibility features, you run the risk of failing much less gracefully for people who need those features than if there were more static content.

(This is, of course, a generalization; specific use cases may vary.)

I think this is a valid generalization. On a page that doesn't use JavaScript, some links, form fields, and/or buttons may be unlabeled, but a blind user at least knows they're there and might be able to look at some hints to their functionality, such as URLs or element IDs. Moreover, the controls and interactions are standard. What happens if the developer attaches some functionality to a device-dependent event, such as onclick, on an undistinguished element such as a div? The element might be styled such that a sighted user knows it's intended to be clicked, but a blind user doesn't get this same information unless the element has an appropriate WAI-ARIA role. And how many web application developers are going to pay attention to ARIA as they craft their unique JavaScript-powered user experiences? Maybe I'm just cynical, but I'm guessing that outside of companies that have a strong incentive to get accessibility right so they can sell to governments and schools, the answer is "not many".

Today's reasonably-built JavaScript-based applications are built using HTML as templates. They don't use a <div> as a button, they use a <button> as a button. They don't invent navigation using JavaScript, they use <a> tags and a JavaScript-based router.

Take a look at discuss.emberjs.com, which uses Discourse (which is powered by Ember... INCEPTION):

On the front page, every navigation, including the links on top and the links to individual forum posts, are <a> tags with an href. When you navigate to a forum post, the buttons to reply, flag, favorite, etc. are all <button> elements.

You definitely want to be (1) using a framework that encourages the use of HTML for the view layer, and (2) using a framework that makes URL-based navigation easy. There's nothing about JavaScript-based applications that make semantic markup and URLs impossible, and today's screen readers don't care whether your HTML was rendered using JavaScript or from the server.

I think you're jaded because the previous generation of frameworks (SproutCore, Capuccino, Ext.js) shunned HTML in favor of all-JavaScript UI toolkits, but that framework style has fallen out of favor. HTML is here to stay.

You're right to point out that the current best practice is to use HTML as intended, including semantic markup for links and buttons. Still, some developers attach mouse and/or touch events to semantically undistinguished elements without providing appropriate markup. For instance, the TodoMVC examples do this (including the Ember.js one). I guess it's time for me to start submitting pull requests.

It looks like the Ember one has three {{action}}s:

1) <button {{action "removeTodo"}} class="destroy"> (a button)

2) <button id="clear-completed" {{action "clearCompleted"}} {{bind-attr class="buttonClass:hidden"}}> (a button)

3) <label {{action "editTodo" on="doubleClick"}}>{{title}}</label> - not a button, but a requirement of TodoMVC. It would be nice if TodoMVC had a <button> marked edit that achieved the same goal.

There is also a subclass of <input>, which:

1) Automatically focuses on insertion (native focuses are properly handled by screen readers)

2) Clears out the Todo if the text becomes empty and the user hits return (somewhat screen-reader friendly, and a requirement of TodoMVC)

What am I missing?

Using <button> as a button? Enjoy your form submission when it's inside a form element. HTML is the most frustrating thing in the universe.

<button type="button"> does not do anything, even inside a <form>.
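A minimal illustration (made-up form, standard HTML behavior): the default button type is "submit", so opting out takes an explicit type="button".

```html
<form action="/search">
  <!-- default type is "submit": clicking this submits the form -->
  <button>Search</button>
  <!-- type="button" never submits; safe for JS-only handlers -->
  <button type="button">Toggle filters</button>
</form>
```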

Ah, thanks. HTML is very unintuitive; you have to know the behavior of each element.

Absolutely, but the context "afterthought" narrows the situation to a very specific case, that in which you have given no consideration at all about accessibility.

My belief is that if you were able to produce good markup, it would have been accessible from the very beginning, because accessible markup tends to be well-formed and semantically correct.

This would probably rule knowledge of WAI-ARIA out from everyone who fit parent's case.

From the article:

Of course, there will always be cases where server-rendered HTML will be more appropriate. But that’s for you to decide by analyzing what percentage of your users have JavaScript disabled and what kind of user experience you want to deliver.

Seems like a pretty big caveat to me.

You know, check out bustle.com -- I used to agree with you, but that's really fast for both the first load and subsequent loads.

Idk, I'm not impressed with the initial load time. If that site were served as a static page that was recompiled when a new article was added, it would be significantly faster. But at the same time, subsequent loads would probably be a little slower. Which one is more worth it?

You can do both! Prerender the front page server side, then switch to super-fast PJAX once the JavaScript is loaded. This nifty technique is called... wait for it... "progressive enhancement".
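A rough sketch of that enhancement step, with DOM and fetch details abstracted into parameters (enhanceLink, fetchFragment, and pushState here are made-up names, not Rendr's API). Without JS the <a> tag works as a normal link; with JS the click is intercepted and only the content region is swapped:

```javascript
// Upgrade a server-rendered link to PJAX-style navigation.
function enhanceLink(link, container, fetchFragment, pushState) {
  link.addEventListener('click', function (event) {
    event.preventDefault();             // skip the full page reload
    fetchFragment(link.href).then(function (html) {
      container.innerHTML = html;       // swap only the content region
      pushState(link.href);             // keep the URL shareable
    });
  });
}
```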

Airbnb's recently released Rendr library (https://github.com/airbnb/rendr) is a great attempt to make this easy for complex web apps to do.

Oh wow! I get to write two code paths! I'm so excited to do double the work for a small subset of users!

Oh, and I get to test two code paths, and all of the wonderful ways they might interact!

The whole point of Rendr is you don't have to write two code paths. That's why I linked to it.


Performance is paramount. On many smartphone browser tabbing out while I wait for your slow-assed page to load and render isn't an option like it would be on the desktop. I feel the pain of every lazy-loaded image, every sluggish layout script, etc. Many "responsive" pages are actually the worst culprits.

Yes, I need a new phone. But older smartphone browsers aren't exactly a rare use-case.

Also, spare a thought for those who have to use screen readers.

If your 'app' is just a web page — use a web page!

Sometimes I wonder whether advocates for heavy client-side JavaScript ever bother to test on mobile devices, because it very much seems that in the majority of cases they don't. The vast majority of JavaScript-based "show HN" submissions I've seen don't work on mobile browsers well or at all, to the point where I've been trained not to click them. This submission from a few days ago is an example, and last I checked it caused browser crashes and, even if you managed to load it, was unusable with an iPad: https://news.ycombinator.com/item?id=6270451

Edit: Here is another example from the past few days that's unusable on mobile: https://news.ycombinator.com/item?id=6302825

This is a common problem with media sites. I dread trying to load Quartz, USA Today, Gawker, etc., since every thick-client media site is intermittently a bad citizen on mobile browsers (for example, Gawker in Chrome on iOS currently perpetually reloads the page). Even when these kinds of sites work, there's often a multi-second delay where the user has to sit and watch elements fly around as the page is built.

Edit again: just to be clear, if you are a non-technical product manager or CEO type, my comment should not be interpreted in any way as an implication that the web or JavaScript is somehow inherently bad and therefore your developers must build a "native app." My comment's intended audience is developers with a deep, working understanding of these technologies.

I think you just invented an internet forum with a proof-of-technical-proficiency requirement for each post. Well done, sir.

>since every thick-client media site is intermittently a bad citizen on mobile browsers

Not just mobile browsers. They are horrible everywhere that isn't chrome or firefox.

I think the point that's overlooked here is that the offenders aren't the clever apps that would be impossible to write without javascript. Go ahead and write those. I'm happy for you, really.

The problem is pages that require javascript to display static content. There are very few good reasons for an article, or an image gallery, or a homepage that could have been displayed just fine a decade ago to now need javascript so it can do some stupid flashy thing that breaks the expected interface behaviour. And frankly, that's most of what I'm seeing in "Sigh, Javascript."

What, exactly, is the issue with writing a simple content site that requires JS? If you aren't pulling in half a MB of JS, but it does require scripting to function, I don't see the problem. Just because I could spend the effort to be progressive doesn't mean I have to do it. It doesn't make me a lazy programmer either: just pragmatic.

99.99% of the people visiting my site have JS enabled and I'm happy to support them just fine without worrying about the really long tail.

Search-engine optimization is a large part.

The fact that the JS frequently results in lower usability is biggest on my list. I've encountered Blogger themes which are so fucking piss-poor I literally cannot read them, even with JS enabled.

If the primary goal of your site is content, stick with vanilla HTML. You're vastly better off for it.

> Search-engine optimization is a large part.

It doesn't look like it is too difficult to add search indexing support.


"Not too difficult" and "is actually done by your typical technically naive person" are a few worlds apart.

Javascript-dependent content _can_ present problems for search engine accessibility.

There have been techniques for dealing with this for a very long time: https://developers.google.com/webmasters/ajax-crawling/docs/...

And this is only an issue for pages where you actually want to expose your content to search engines.

Sigh, Google-only websites.

_escaped_fragment_ is a stupid hack that requires the hardest bit of the work needed for progressive enhancement, but provides none of the benefit.

_escaped_fragment_ is a bit of a hack, but it also provides enormous benefit. It allows you to gain all the benefits this article is talking about (which I won't repeat here) and still make your site crawlable without duplicating code.

You are right about one thing: it is a pain to set up. Much harder than it sounds at first. After doing it a few times, I finally got sick of it and built http://www.BromBone.com. It's a service that generates the html snapshots for site owners and keeps them up to date. I hope it will let site owners keep all the positives and eliminate most of the new negatives.

Not only Google! Bing (and thus Yahoo), Yandex, and Facebook (only #! for the last) support this protocol too.

And if you use HTML5 pushState you could even serve the HTML captures to any bots. Eventually you could also easily add graceful degradation to your site by serving snapshots to non-JS users. As a PoC for SEO4Ajax (http://www.seo4ajax.com), I built this application (http://www.appscharts.me) illustrating this approach.

Google has been crawling AJAX'd websites bypassing the _escaped_fragment_ mechanism for at least a year now.

Has it really? Do you have an example?

I've never seen an example where Google crawled a page and included content that was loaded via ajax or an external javascript file. I have seen Google index content that is loaded from a script tag included in the html page.

Certainly cool that Google is working on this. I suspect someday it will all work perfectly.

However, from working with customers of BromBone, I know this. People have javascript powered webpages that were not getting indexed by Google. Google would crawl the page, and see no content. The pages were not in Google. After creating prerendered snapshots and serving them when the _escaped_fragment_ parameter was present, the pages showed up in Google.

It's not like other search engines can't step up their game and build the same functionality.

The problem is that now everyone who wants to build a decent crawler/scraper has to do it as well. Used to be any kid in his parents' basement could put wget and sed together and explore the internet. Now Timmy has to bolt a whole damn browser runtime onto his script to make sure he's getting the content text from blog posts and twitter feeds.

Asking Timmy to "step up his game" is heartless, and it'd be unnecessary if we were better citizens.

Maybe Timmy should just have a look at the Google specification! He would see that he could continue to crawl the web with his favorite tool. He just has to replace #! with an _escaped_fragment_ query parameter; that's not so heartless ;)
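That rewrite is mechanical. A sketch of the #! to _escaped_fragment_ transformation the crawling scheme describes (the URLs below are just examples):

```javascript
// Rewrite a hash-bang URL into its crawlable query-string form.
function toCrawlableUrl(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url;  // no hash-bang, nothing to rewrite
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}
```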

I have to assume, in Google’s case anyway, that they are indexing according to ‘what the user sees’. Not to do so would means lower-quality search results. Users don’t care how the page ‘happens’.

I make this assumption because most of us don’t actually know how Google works, acting as a user’s proxy is in Google’s interest, and it is certainly within their technological capability. (Let me emphasize assume, again.)

This would require that the crawler actually renders sites, indexes the resulting DOM, and clicks what can be clicked. A headless Chrome, perhaps.

From Google's technical guidelines:

"Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site."


Not sure what you are trying to say by merely copying and pasting text about random search engine spiders that may have trouble crawling you. Maybe you could explain?

With regard to Google, however, they are explicit in their support for JavaScript rendered sites:


Amongst many other links.

"99.99% of the people visiting my site have JS enabled"

Do you know that for a FACT, or are you just guessing?

His javascript based analytics software has all the numbers.

To add to this, generally most user data collected is through javascript.

If you don't have javascript enabled...

Yup, that's always in the back of my mind when people talk about javascript-enabled user stats. Of course, it's perfectly possible that the OP IS recording accurate(-ish) JS usage stats, it's just more involved than other client stats that are readily provided to the server, so less likely. If 99.99% wasn't just a figure plucked out of the air, it sounds believable that it's being obtained 'correctly', since it's not 100% :)

...because if it's say 98%, you have just thrown away 2% revenue.

Hardly, you have to consider the additional cost of attempting to support that additional 2%; it could very well cost more than it's worth in development time and long term complexity. Of course that's an app specific decision. Every market you specialize for has a cost, short term and long term, so it has to be worth it financially; businesses aren't in the habit of just being nice for no reason.

"Hardly, you have to consider the additional cost of attempting to support that additional 2%"

That's certainly not how I calculate revenue...

Revenue isn't the right metric; profit is. My point is that chasing revenue isn't what good businesses do, they chase profit. I don't care about 2% additional revenue if doing so reduces profit due to the extra development and maintenance overhead and neither should any sane business person.

Obviously. This would have been a solid response to dreamfactory's comment.

Another way to put it is that vanilla HTML will support the full market so you have to be confident that moving away from it will cover that margin in revenue or savings just to break even. On a large commerce site for example, even a small percentage of revenue could be a staggering amount.

That's the opposite way of putting it: as extra work to support edge-case users who often aren't worth the effort. Now maybe it's worth it for a big enough site, but for the average site I doubt it ever is. I think it's perfectly reasonable to ignore users who disable JavaScript for most websites.

So do I, tbh, but nobody ever makes a proper business case in practice. If you are going to say that HTML is no longer the default protocol for serving content to browsers (and that it is therefore extra work to support rather than the other way round), it seems like quite a big step. I guess we can rationalise the tattered corpses of Java widgets, Flash, ActiveX, and Silverlight by saying they had proprietary roots and no universal support, but it feels like there's a lot of handwavey stuff around client support, RESTfulness, and SEO.

Of course HTML is the default protocol, but not vanilla HTML; many sites require JavaScript as well; the default is HTML + JavaScript not just HTML. Supporting non js enhanced HTML is extra work.

It isn't extra work or an edge case. It is doing the baseline correct thing. I can't count the number of fad obsessed "web developers" who ask us why we serve html instead of javascript crap, while they take 3 times as long to produce a broken version of the same thing.

> It is doing the baseline correct thing.

You don't get to define what the baseline correct thing is for a business, profit does that. Progressive enhancement is more work than simply presuming JavaScript is always available; that's just a simple undeniable fact.

>You don't get to define what the baseline correct thing is for a business, profit does that

That's what I am using.

>Progressive enhancement is more work than simply presuming JavaScript is always available; that's just a simple undeniable fact.

I am denying it right now, that is the whole point. Server side templates with simple javascript enhancement has been much faster for us, and much easier to maintain.

> I am denying it right now, that is the whole point. Server-side templates with simple JavaScript enhancement have been much faster for us, and much easier to maintain.

I simply don't believe you. It's a fact that it's more work to support both AJAX interactions and work with js disabled than it is to support just AJAX alone. There is no arguing this, it's more work to support 2 modes of interaction than it is to support only 1. You may find it a comfortable workflow for you, to start with vanilla HTML and then progressively enhance it, but regardless of how much you like the style, it is more effort and does have more long term maintenance costs to forever support 2 modes of interaction. No one who isn't completely full of shit would argue that progressive enhancement is less work.

HN is not the venue for that kind of garbage. Your personal experiences and biases are not universal objective truths. Grow up.

I require no lecture about HN from you, I've been here since the beginning. I listed facts not personal experiences and I'm sorry you can't handle that and resort to lying.

> It's a fact that it's more work to support both AJAX interactions and work with js disabled than it is to support just AJAX alone

That is not a fact. It may well be your experience from the way your applications were written, but it is not a fact. You clearly do need a lecture, or you wouldn't be posting garbage.

Whatever child; goodbye.

The "issue" is that you can just put the simple content in a noscript tag and then your simple content site does not require JS. The content cannot be simple if JS is required.
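
For example, a minimal sketch of that approach (the markup and ids here are illustrative, not from any particular site):

```html
<!-- The JS app renders into #app; a plain-HTML copy of the simple
     content sits in <noscript> for clients without JavaScript. -->
<div id="app"><!-- client-side rendered version goes here --></div>
<noscript>
  <article>
    <h1>Post title</h1>
    <p>The same content as plain, readable HTML.</p>
  </article>
</noscript>
```

Of course, this only stays "simple" while the content can genuinely be duplicated server-side; once it can't, JS is effectively required.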

Because most of those solutions have their own half-assed impls of:

* Loading indicators

* Navigation through history

* Error display

* Scrolling

I could go on.

The top reason to require Javascript is to enable user tracking (e.g. Google Analytics, but not just that at all).

Check out meteor.js for why JavaScript would be needed for "static only content". Real time updates. That's why. Everything is an app.

Which is silly. If I'm just trying to view a news article from yesterday or a year ago, I shouldn't have to worry about what some silly script is trying to do.

If the core of the content is or should be text, then there's no need for countless silly layers around it.

Maybe you want a PDF instead.

Or maybe some sort of "hypertext" document. That could contain links to other such documents, allowing users to easily navigate content. All this interlinked content could form some sort of "web" even.

When your only tool is a hammer, everything looks like a nail. There are valid use cases that meteor.js is awesome for, but one should avoid cargo culting.

Maybe I'm getting old but sometimes I really really don't want real time updates.

You want to keep hitting refresh, refresh, refresh?

He wants the page not to change under him when he hasn't hit refresh. Personally, race conditions between user and UI drive me crazy.

Is this something people even want in content driven sites though? Imagine reddit in real time, it would be a mess and impossible to keep track of what you last read.

Why should users have to click refresh for pages to load more comments or to see when someone replies to you? The fact that reddit does not do this is not evidence that it would be confusing.

Because sites that distract me from what I'm reading to tell me that there are new comments are extremely annoying.

Sure, you could add, say, auto-loading for new comments when you reach the bottom of the page, or setting the message icon to orange when you get a reply, but that hardly requires a full-blown web app, it's a simple addon to a static site.

Disqus deals with this issue quite well: it will just notify you that there are new comments; it's up to you to load them or not.

You can try it out for yourself :)


I have, and it works OK when there are a ton of concurrent users. But on the other hand, I don't think the UX of Telescope is any better than reddit's, and in some ways it's worse (example: clicking on a post with a large number of comments causes the page to pause without any visual feedback. This could be fixed, but is it any better than just a regular link?)

So, for static content that changes in real time.

No reason why you can't provide real-time updates on your content-driven site and respect progressive enhancement. I'd submit this would be the easiest way to do it.
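
A rough sketch of that shape (the endpoint and helper names below are made up for illustration): the page is served as ordinary HTML, and a small script, if it runs, polls for updates and offers them rather than rewriting the page underfoot.

```javascript
// Pure helper: which fetched comments aren't already on the page?
function newComments(existingIds, fetched) {
  const seen = new Set(existingIds);
  return fetched.filter((c) => !seen.has(c.id));
}

// Browser-only enhancement (the page is fully readable without it).
// The /comments.json endpoint and the idsOnPage/showNotice helpers are
// hypothetical:
//
// setInterval(async () => {
//   const fetched = await (await fetch("/comments.json")).json();
//   const fresh = newComments(idsOnPage(), fetched);
//   if (fresh.length > 0) showNotice(`${fresh.length} new comments`);
// }, 30000);

console.log(newComments([1, 2], [{ id: 2 }, { id: 3 }])); // keeps only id 3
```

With no JS, the reader still gets the full comment page; with JS, they get a notice they can act on, Disqus-style, instead of a page that shifts underneath them.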

> Don’t be ashamed to build 100% JavaScript applications. You may get some incensed priests vituperating you in their blogs. But there will be an army of users (like me) who will fall in love with using your app.

We all want wonderful experiences as users. The crux is almost a question of "how we want things to be" and "how we want to get there".

For me, the 100% JS MV* movement is wonderful for a specific genre of app: an app that is:

* Behind an intranet

* Behind a paywall

* Behind a login-wall

* Prototypes / Demos / PoCs / etc.

But for the open web -- wikipedia, blogs, discussion forums, journalism (etc.) this movement detracts from the web as a whole, in that it excuses developers from having to worry about degraded and/or non-human consumption of their websites' data.

We have to ask ourselves what we, as humanity, want from the web. Do we really want a web of 100% bespoke JavaScript MV* web-apps with no publicly consumable APIs or semantic representations? If that is the intent and desire of the developers, designers and otherwise educated, concerned web-goers, then fine, let's do that and hope it works out okay...

But there is an alternative that has its roots already planted deep in the web -- the idea and virtue of a web where you can:

* Request an HTTP resource and get back a meaningful and semantically enriched representation

* Access and mash-up each-others' data, so as to better further understanding & enlightenment

* Equally access data and insight via any medium, the latest Chrome or the oldest Nokia

So, please, go ahead and create a 100% JS front-end but, if you are creating something for the open web, consider exposing alternative representations for degraded/non-human consumption. It doesn't have to be progressively enhanced.

Imagine for a moment if Wikipedia was one massive Ember App... And no, Wikipedia is not an exception from the norm -- it is the embodiment of the open web.

Every Ember.js app, by definition, connects to an API that does all of the things you are asking for.

Seriously, open up any Ember.js app and look at the network traffic. You'll see a series of requests, usually using very RESTful URLs, that fetch the document.

The only difference is that, instead of HTML, where you are conflating the markup and the content, you get a nice, easily-consumable version of the document in JSON form.

There is literally no change to the web here, other than the UIs are faster and better, and it's easier for me to consume JSON documents than trying to scrape HTML.

That could work -- if the JSON is intended for public consumption, and if it is documented as such. The problem, I'd argue, with JSON is that it does not intentionally facilitate semantic annotations, unlike HTML(5). I'd argue that a properly marked-up HTML5 representation of a piece of data is more useful than a bespoke JSON structure with crude naming liable to change without notice. The benefit I get with an HTML representation is that it's the exact thing that was intended for the user to read/consume, whereas JSON is awkward to divine meaning from without the crucial app-specific view logic that turns it into DOM.

How would you reconcile the need for an open-semantic-web with arbitrary JSON structures with no governing semantic standard?

EDIT: An example of a potential problem: Please take a look at how the Bustle app you referenced brings its article content to the front-end:

E.g. http://www.bustle.com/articles/4470-why-we-should-root-for-l...

View source. It's not a public REST API (not visibly so); it's awkwardly embedded as literal JS in the HTML document itself... That'd be hell to publicly consume through any kind of automation.

You are building up a strawman against JSON without acknowledging that every problem you outline applies just as much, if not more, to HTML.

Is the HTML of any popular website publicly documented? Is there any guarantee that an XPath to a particular value won't change? Is there any guarantee the data I need is marked up with semantically accurate class names? No.

HTML is intended for public consumption—by a human, at a particular time. It is not a data interchange format.

Contrast that with things like Twitter or GitHub, which provide a versioned JSON API that is guaranteed not to change. Your web site becomes just another consumer of that API.

JSON contains all of the data you need, but in a way designed to be consumed by computers, and you don't have to do all of that awful HTML scraping.

And as for Bustle not having a public JSON API, well, here you go:

curl -H "Accept: application/json" http://www.bustle.com/api/v1/sections/home.json

A versioned JSON API that is guaranteed not to change. Can any public site on the internet guarantee that about its HTML?

A versioned JSON API is awesome, I am not denying that. I also don't deny that the current state of the HTML markup on most sites is semantically rubbish.

Regardless of this entire PE debate, we would still have a problem, on the web, of data being out of reach due to walled apps that only serve rubbish HTML.

The problem of open + semantic data is very relevant to this discussion but we're pretending that one "side" has all the answers. I want a better web -- more open -- more semantic -- and maybe some shimmer of a truly semantic web[1] will emerge in the next 20 years.

So, yes, a 100% JS App is 100% awesome if, IMHO, it has:

* A publicly documented and consumable REST API

* Semantically enriched data through that API

* Some kind of degraded state NOT just for search-engines but for older devices and restrictive access (e.g. behind national/corporate firewalls)

I am not interested in being one side or another regarding this PE feud, and I am sure you're not either. I am trying to question what is best for the web and humanity as a whole. I don't think we have a silver-bullet answer. I do think it's necessary to dichotomize walled web-apps and open websites, and the latter deserve additional thought regarding usability, accessibility and semantics.

[1] http://en.wikipedia.org/wiki/Semantic_Web

And in fact that particular Bustle link you posted is a perfect example of where using HTML5 + microdata would not only be faster to render and crawlable, but would also allow the underlying data structure to be consumed by JavaScript. There's no reason why

couldn't have been extracted from

    <article itemscope itemtype="/article">
        <h1 itemprop="title">Why We Should Root for Lamar Odom</h1>

It's a real bitch to implement when your use case is complicated and nested. From experience, I have no idea how to mark up a document "correctly" because of the recursive definitions of some microdata. Like, a flingy can contain a thingy, and a thingy can contain a flingy.

Should I mark my concrete item as a flingy where some elements are thingies, or should it be a thingy with some sub-elements as flingies?

I just did my best and called it a day. Then spent a lot of time debugging it in the microdata analyzer tool.

you can mark sub-scopes with their own itemscope property, so you could have this:

    <article itemscope itemtype="/article">
        <h1 itemprop="title">My Title</h1>

        <div class="related-thing" itemscope itemtype="/author" itemprop="author">
            <span itemprop="firstName">John</span>
            <span itemprop="lastName">Smith</span>
        </div>

        <aside class="incidental-unrelated-thing" itemscope itemtype="/sausage">
            <span itemprop="type">Cumberland</span>
            <span itemprop="ingredients">Mystery Meat</span>
        </aside>
    </article>
In the example above, we define an article that has a related author property; that author property is its own scope, so it has firstName and lastName properties of its own. We also define an unrelated itemscope (unrelated because it has no itemprop) that happens to be nested in the same element, so this would parse to:

    [{
        "type": "/article",
        "properties": {
            "title": ["My Title"],
            "author": [{
                "type": "/author",
                "properties": {
                    "firstName": ["John"],
                    "lastName": ["Smith"]
                }
            }]
        }
    }, {
        "type": "/sausage",
        "properties": {
            "type": ["Cumberland"],
            "ingredients": ["Mystery Meat"]
        }
    }]
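
The extraction walk itself can be sketched in a few lines of JavaScript. This is a hedged illustration, not the spec algorithm: it runs over a minimal hand-rolled element tree (attrs/children/text) instead of a real DOM, and the tree shape is an assumption for the example.

```javascript
// Collect top-level items: any itemscope that has no itemprop.
function extractItems(node, items = []) {
  const a = node.attrs || {};
  if ("itemscope" in a && !("itemprop" in a)) items.push(extractItem(node));
  (node.children || []).forEach((c) => extractItems(c, items));
  return items;
}

// Build one item: gather itemprop descendants, recursing into sub-scopes.
function extractItem(node) {
  const item = { type: node.attrs.itemtype, properties: {} };
  const walk = (n) => {
    for (const c of n.children || []) {
      const a = c.attrs || {};
      if ("itemprop" in a) {
        // A nested itemscope becomes a sub-item; otherwise take the text.
        const val = "itemscope" in a ? extractItem(c) : c.text;
        (item.properties[a.itemprop] = item.properties[a.itemprop] || []).push(val);
      } else if (!("itemscope" in a)) {
        walk(c); // plain wrapper element: keep looking inside
      }
      // itemscope without itemprop: an unrelated item, handled at top level
    }
  };
  walk(node);
  return item;
}

// The <article>/<aside> example above, as a tree:
const tree = {
  attrs: { itemscope: "", itemtype: "/article" },
  children: [
    { attrs: { itemprop: "title" }, text: "My Title" },
    {
      attrs: { itemscope: "", itemtype: "/author", itemprop: "author" },
      children: [
        { attrs: { itemprop: "firstName" }, text: "John" },
        { attrs: { itemprop: "lastName" }, text: "Smith" },
      ],
    },
    {
      attrs: { itemscope: "", itemtype: "/sausage" },
      children: [
        { attrs: { itemprop: "type" }, text: "Cumberland" },
        { attrs: { itemprop: "ingredients" }, text: "Mystery Meat" },
      ],
    },
  ],
};

console.log(JSON.stringify(extractItems(tree), null, 2));
```

Note how the sausage aside falls out as its own top-level item, exactly as described: it is in scope of nothing because it carries no itemprop.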

Implement Readability in such a way that it can find and read any JSON api.

HTML is not perfect, but at least you can usually pick out which bit is the title and which bit is the article content through some heuristics.

"And as for Bustle not having a public JSON API, well, here you go: curl -H "Accept: application/json" http://www.bustle.com/api/v1/sections/home.json A versioned JSON API that is guaranteed not to change. Can any public site on the internet guarantee that about its HTML?"

That's an interesting definition of "public JSON API" you are using.

* Where is the public documentation for this endpoint?

* Given an ember app, how do you arrive at that URL? (Previously you mentioned watching network requests from a browser to discover the URL endpoint, I think that's particularly unsuitable way of discovering a public JSON API)

* Where's the schema definition of that JSON blob?

That's as much a public JSON API as "curl http://en.wikipedia.org/wiki/Louis_Boullogne" is a public HTML API. And at least with the wikipedia one, there is some defined structure and implied meaning to the data coming back (i.e. standardised HTML elements).

"A versioned JSON API that is guaranteed not to change."

How is that guarantee enforced? What prevents a developer from changing the nature or structure of the response at that URL?

Strongly agree. The day that merely reading Wikipedia requires JavaScript will be the day that the open Web is dead.

There are dinosaurs like me who use the web mostly for reading stuff on websites. I also happen to use an old, quite slow computer as my default machine.

It aggravates me when a site that I try to open because of its textual content takes 30 seconds to render since there's too much JavaScript going on. Then I'm typically sitting there thinking: "how hard can it be to display a piece of text?" Because of this, when I see my CPU spike as I try to open a website in a new tab, I very often decide to simply close the tab again and do without the information I originally came there to see. This happens a lot for me with online magazines, such as Wired or TechCrunch. One trick is to invoke the "Readability" bookmarklet if I can get to it fast enough, i.e., before the JavaScript has frozen my browser completely.

Of course I understand that I am part of a tiny minority. And probably I'm not part of your target group anyway. And the web is so much more in 2013 than pages with text on them.

If you do, however, want someone like me to come to your site, you better remember to keep it dinosaur-friendly.

You're in a demographic that clearly chooses not to spend money or doesn't have any money to spend, which is not attractive to people trying to make money on the internet. I'm guessing government sites would be the only sites that are interested in serving you.

Just because I don't want to spend money on a new phone or a new computer doesn't mean that I don't want to spend money on a new car.

In all honesty, why should I care about you, the tiny minority? Why should I waste any time at all worrying about you?

There are two different kinds of minority to consider:

1. People who are part of a minority by choice. For instance, they choose to use an old computer, old OS, old web browser, etc. even though it is within their means to upgrade. Or they have current software but intentionally restrict it with add-ons like NoScript. You might be justified in not catering to these minorities, because it's just not profitable and you have no moral obligation to do so. Maybe the GP falls in that category; I don't know.

2. People who are in a minority through no choice of their own, and have no power to change their circumstance. One example would be a poor student or job-seeker who's stuck with an old computer and can't do anything about it. People with disabilities also fall in this latter category. I know someone who once lost a job because he is blind and some inaccessible software barred him from doing that job. He described how that felt in this blog post:


All that to say that your comment is quite insensitive. Those are real people in that small minority, and depending on why they're in that minority, we developers might have an obligation to accommodate them.

EDIT: Yes, I know that JavaScript apps can be accessible.

I should've phrased it better, but I was asking him as someone who is in a minority by choice.

I fully understand making a site accessible for those who need it, but I don't understand the NoScript people.

What is there to understand about NoScript people? They just dislike malware, having their accounts stolen and having their personal data up for sale.

Also, if you don't have an unlimited Internet plan, not having to download 5MB of trackers, "analytics" scripts and Flash ads can be a pretty significant advantage.

Social justice is not a factor for most people when designing a frontend architecture. My heart goes out to those running IE6 on Windows Me, but I'm not going to let the minority oppress the majority for my app I'm building.

Well, everybody is in some kind of minority... But that's not the point.

Are you sure about your numbers? How do you know that people with javascript disabled are a minority? Or, more specifically, how do you measure that without requiring javascript?

I'm all for making web applications that need javascript. But let's do it for the right reasons, not because "everybody has javascript installed".

Nobody's saying you should. But designing things in a progressive-enhancement-type way will mean that you have no need to worry about all these "minorities" as most things will generally more or less work.

Besides we're all minorities in one way or another. I'm sure I could find some dimension by which you're a minority and you'd be pretty cheesed off if you weren't catered for for some trivial reason.

Not everything in this world has to cater to me. I'd prefer it if everyone stopped trying to be all things.

You'd be fine if half the web didn't work for you?

How about we say it's fine replacing text with images? We could always use alt tags for accessibility. The number of people who ever actually select/copy text from your website will be <1%, so why not just do away with it and get the text in any font we like, with any rendering style & layout we like?

Because that is an utterly ridiculous strawman. Using images for text is firstly almost entirely pointless, and has many many drawbacks that using JavaScript doesn't. For instance, it's a pain in the arse to maintain, it breaks text reflowing, it's huge, you have to deal with image compression, and it likely ruins the ability of search engines to spider your content without OCR.

Not to mention that any decent screen reader should surely be decent enough to extract text from an already rendered page (I mean christ, we have testing frameworks that can basically do this already).

If building apps using purely JavaScript breaks screen-readers, we need to fix screen readers. If our applications aren't working for people who insist on browsing with lynx or whatever, well, for shame.

Honestly though, another factor of "JavaScript only" things is that it means that some sort of API is exposed to the client, so if it's really such a massive problem, just pull the JSON and parse that into something readable.

Not as ridiculous as you think - a large amount of content is locked into PDFs which present effectively as images, and HTML wasn't always as dominant as it is today.

The idea that it is actively shameful to use Lynx is strange and antithetical to the way the web was originally designed - the user agent is SUPPOSED to be in control of presentation, the markup is SUPPOSED to be semantic.

It's not that it's shameful to use Lynx. I love Lynx. However, the web is continuously evolving, and things like Lynx are not keeping up - I mean, does it support WebGL and HTML5 video?

I absolutely think there's a place for a text-based browser still, however, one that is designed with modern considerations.

I would argue it's fairly similar to using javascript to load documents. If you have a web-app, I'm a huge fan of javascript/client-side rendering. If you've got a blog, not so much.

Well of course, tools for the job.

"Because that is an utterly ridiculous strawman."

Of course it is, but it's clearly an admitted strawman.

What's wrong with people these days? You can't rhetorically explore someone's views without everyone jumping in and shouting "strawman" as if it's their new favourite word.

> What's wrong with people these days? You can't rhetorically explore someone's views without everyone jumping in and shouting "strawman" as if it's their new favourite word.

A strawman -- which, as you admit, this was -- is very different from "rhetorically exploring someone's views". It's "rhetorically exploring something distinct from the views of your opponent, and using it to impugn the views of your opponent."

Which, you know, makes people upset.

>A strawman -- which, as you admit, this was -- is very different from "rhetorically exploring someone's views"

No, it isn't. You are making the mistake of conflating a strawman with an argument built on a strawman as a logical fallacy.

> > A strawman [...] is very different from "rhetorically exploring someone's views"

> No, it isn't.

Yes, it is.

> You are making the mistake of conflating a strawman with an argument built on a strawman as a logical fallacy.

No, I'm not. If you are exploring their views rather than yourself constructing something new and distinct from their views, you aren't making a strawman, whether or not you also, implicitly or explicitly, are arguing against their position using whatever you are exploring, which would be the strawman fallacy in the case where you were constructing a strawman.

A strawman can be used to explore their views. That is the point. It is only a fallacy if you create a strawman, then argue against it and claim to have argued against the original point. That has nothing to do with anything that occurred in your conversation.

> A strawman can be used to explore their views.

No, it really can't. Using something meaningfully distinct to "explore" their views is the exact same logical fallacy as using something meaningfully distinct to "argue against" their views. What you are dealing with is something distinctly different than their views, whether you are "arguing against" it or merely "exploring" it.

And, even if it could, it still wouldn't be equivalent to exploring their views, so being called out for using a strawman when using a strawman -- for whatever purpose -- would still not be being called out for using a strawman whenever you rhetorically explore someone's views.

What I’ve found, counter-intuitively, is that apps that embrace JavaScript actually end up having less JavaScript. Yeah, I know, it’s some Zen koan shit. But the numbers speak for themselves.

The author does very little to support his claims. The Boston Globe page also has a lot of scripts to support advertising, plus the advertising itself, as well as an entirely different engineering team and probably culture. There's not even any research into The Boston Globe's use of progressive JS; there's no reason the two homepages couldn't have the same JS footprint, with The Boston Globe continuing to work and Bustle continuing not to work while JS is disabled.

I'm all for not supporting progressive JS; Bustle is certainly within its rights not to work without JS. The author is just caught in a confirmation-bias bubble, and his conclusions don't make sense. Our intuitions are right: it doesn't take [much] more JS to progressively enhance a site.

> Worrying about browsers without JavaScript is like worrying about whether you’re backwards compatible with HTML 3.2 or CSS2. At some point, you have to accept that some things are just part of the platform.

This is the key bit.

It's a pretty popular attitude on HN to dismiss supporting IE, or IE7, or even IE8 or IE9 -- despite their significant user bases. But there's still a strong, vocal contingent which argues that webpages should still work without JavaScript, despite that being a minuscule user base. Both positions seem to come from philosophical standpoints rather than anything practical. (Granted, SEO is a valid consideration, but that's fundamentally a different conversation.)

It's not just the people with JavaScript turned off: it's people who, for example, rely on the Readability or Readable bookmarklets or Safari's Reader functionality.

In general, turning documents into programs deprives users of those documents of a kind of flexibility that they enjoyed when documents were just data.

And does it not bother you that web-browser development has gotten so complicated and labor-intensive that there are exactly five organizations with the resources to maintain a web browser?

What hope do operating systems with very small userbases, like Plan 9, have of ever running a web browser capable of correctly displaying the majority of web sites?

Browser complexity closes off certain opportunities: e.g., about 15 years ago, a blind programmer named Karl Dahlke wrote a "command-line web browser" called "edbrowse" that has a command language similar to the line editor ed. Is it OK with you that the fraction of web pages browsable with edbrowse keeps going down?

Another way that making the web a richer application-delivery platform reduces the options available to users: approximately nobody bothers to maintain a local copy of the web pages they have browsed (which would be among other things a useful insurance against web pages disappearing from the web) because it is so complicated to do.

And then there is the loss of consistency useful to readers. For example, when you click on a link, then hit the Back button, the browser used to always put you back at the same place in the web page that you were when you clicked the link. Not anymore: for example, if you click a search result on hnsearch.com, then hit Back, you are taken to the start of the web page containing the search results with the result that you have to scroll through results you've already sifted through just to get back to the state of progress you were in when you clicked the link.

A possible reply to that is that the maintainer of hnsearch.com should fix his web site. But the number and variety of "losses" or "regressions" like that one is so large -- and increasing so fast -- as to make me doubt that webmasters will ever get around to fixing most of them, particularly since webmasters on average are less technically knowledgeable than, e.g., programmers are.

Selecting an extent of text in preparation for copying it is another thing that has become less consistent and controllable over time: sometimes when I just want to select a few words, slight movement of the cursor while dragging will cause an entire adjacent column of text to be selected or de-selected according to rules that are essentially unknowable to the reader.

In the past, for about 15 years, the space key consistently meant "scroll down a screenful" (provided the page as a whole has the focus -- as opposed to, e.g., a TEXTAREA in the page). The desire to turn the web into an applications-delivery platform caused the web site to gain the discretion to map the space key to something else, which is a gain for authors of web apps, but a loss for readers who used to be able to depend on consistent behavior from the space key.

In summary, although I am happy that many thousands of applications developers are now able to make good livings without becoming a "tenant" or a "captive" of a platform owned by a single corporation, I am sad about how complicated, tedious and mystifying it has become to use the web to consume static content -- and how expensive (in programmer time and effort) it has become to put static web content to uses not foreseen and provided for by the author of the content.

Amen to Readability.

Increasingly, site design does little but piss me off. I use a set of tools, Readability and Stylebot included (484 styles and counting, several of those applying to multiple sites), to address the more severe annoyances (H/N is one of my restyled sites). What's particularly annoying are content-heavy sites (blogs, online periodicals) which break Readability and/or aren't restylable with Stylebot (I recently encountered a Blogger template which navigated to a different page when I tried editing CSS in the Stylebot editor).

In the original article, Tom notes:

At some point recently, the browser transformed from being an awesome interactive document viewer into being the world’s most advanced, widely-distributed application runtime.

That's pretty much the conclusion I'd reached, though my preference is that tools which are useful for presenting and managing content would be developed: https://plus.google.com/104092656004159577193/posts/LR7jubsX...

Readability is useful, but addresses only a subset of the features I'd like. I've been collecting a large set of literature through it and using Calibre. In particular I want bibliographic capabilities and indexing, as well as much larger tag lists (I ran into Readability's 500 tags per user limit within 3-4 days).

The other problem with JS is that I'm increasingly running into single Web apps which consume, literally, a gigabyte or more of memory (Google+ is perhaps the worst of these).

Which means: I could run a lightweight desktop application which provides a basic set of functionality ... or I can run a browser with perhaps a handful of tabs open, and absolutely pig out my system.

The browser is a decent rapid-development and rapid-deployment environment, but it's still seriously wanting for real productivity.

This author is making an assumption that progressive enhancement exists only so that people who are browsing without Javascript can have a better experience. Of course, this isn't true. Progressive enhancement is a good thing because it encourages you to be as descriptive as possible at every layer of your technology stack.

Why does it matter in practice? Well, there's more than one reason, but consider that not every user agent is a browser with a person sitting in front of it. Your website also should be interpretable by content indexers like search engines, accessibility devices like screen readers, and so on.

Some services don't fit this model and really are better off being designed like desktop applications written in HTML and JS. But in my experience, most services can be modelled more like websites without making the design any more difficult to reason about, and almost all users' experiences are bettered by it.

First, the accessibility argument is a red herring that I'm getting frustrated people continue to try to throw around. Screen readers support JS, okay? Let's put this argument to bed.


Okay, thank you for letting me get that off my chest.

Second, one nice thing about "embracing 100% JavaScript" that I talk about in the post is that it requires you to implement a really solid JSON API, because your web site is now a true client that consumes an API. This makes it really easy to integrate with third-party services that consume your content. I agree that putting content behind JavaScript sucks; I'm just advocating that the content be JSON (or some other normalized format), not HTML.
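
One nice consequence can be sketched concretely: once the content is modeled as data, any representation -- JSON for third parties and your own JS front-end, HTML for plain browsers and crawlers -- is just a function of it. Everything below (function names, fields) is illustrative, not any real site's API:

```javascript
// Same canonical data, two representations negotiated by the Accept header.
function renderArticle(article, acceptHeader) {
  if (/application\/json/.test(acceptHeader || "")) {
    // Machine-readable: the same data any third-party client would consume.
    return JSON.stringify({ title: article.title, body: article.body });
  }
  // Human-readable rendering for plain browsers and crawlers.
  return `<article><h1>${article.title}</h1><p>${article.body}</p></article>`;
}

const article = { title: "Hello", body: "World" };
console.log(renderArticle(article, "application/json"));
console.log(renderArticle(article, "text/html"));
```

Whether the HTML branch is rendered on the server or by a JS client is then an implementation detail; the point is that the content lives in a normalized format rather than being entangled with markup.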

"I agree that putting content behind JavaScript sucks; I'm just advocating that the content be JSON (or some other normalized format), not HTML."

Ah, so don't put it "behind JavaScript", but put it in a format that a browser can't natively handle in a sane way. And use a grab-bag general object format instead of one that has built-in semantic definitions that are useful in a document context, like, idunno, &lt;strong&gt; &lt;em&gt; &lt;p&gt; &lt;a&gt; ?

Also, I've very rarely (if ever) seen something I'd describe as a "solid JSON API".

Can you give an example of one of your sites with a "really solid JSON API"? I'm afraid there's a gap in this definition, as good APIs are rare and good JSON APIs with a single client are virtually non-existent.

Here's Bustle's: http://www.bustle.com/api/v1/sections/home.json

It needs to be run through a prettifier, of course, but seems straightforward to me.

"Friendly reminder that "people with JS disabled" includes those on high-latency networks, bad firewalls, and browsers you don't support." - @jcoglan


And people using NoScript.

And who cares about people using NoScript, no one should. They are in the vast minority, not worth the trouble accommodating for them.

And they know they're using NoScript. If the website doesn't work, they can decide if they want to enable or not. It's their burden, not mine.

Unless, of course, my client wants a 100% working no-script webapp/site. Then I'll happily charge for the extra time building it.

You may want to use a pseudonym for work where you cut corners and failed to do work you're actually prepared to do properly. I browse without javascript enabled to judge the quality of the tools I'm considering; I wouldn't hire anyone with a portfolio full of broken non-semantic documents.

It's not "cutting corners". It does take more time to build a web app that works without any JS.

If it takes more time, the final product costs more. The client should be aware of that and make the decision. Why should he/she always pay for something that will only be useful to a very small % of his/her clients?

And who cares about people using Mozilla, no one should. They are in the vast minority, not worth the trouble accommodating for them. IE all the way.

And whilst a vanishingly small percentage of people were using Mozilla, why not? If Mozilla wanted to get people to use it, it had to be compatible with (most) of the existing web.

Similarly, I won't be testing my websites in Sogou or Yandex until more than a vanishingly low percentage of people are using them on those sites.

A minority that is growing quite fast. People are not stupid; prepare for a JavaScript backlash like the Flash hate wave. If your content is not worth it, if I need JavaScript just to read some text, if you don't explain why I should turn JavaScript on with a noscript tag, then you'll lose me and many more as an audience.

Do you have any statistics to back up your assertion that it's growing? From what I've seen from my own anecdotal experience, as well as from access logs from past places I've worked, this minority is vanishingly small (<0.5%, even among those using screen readers it's ~1.5%) and shrinking by the day as more mobile devices support javascript.

Regardless, there is no right or wrong answer in general, there is only the right or wrong answer for your particular website/market. If a significant number of the users you want to support have JS disabled, then by all means, build a site that runs without JS. But it's a cost-benefit analysis. For most sites I've worked on, the cost of lost business due to users without JS is (massively) dwarfed by the added development cost of building a full-featured app that doesn't require JS. If that means I lose you as a customer, I'm not losing sleep over it.

How do you differentiate a user using NoScript from a bot disguised as a browser?

If I just look at my logs, more than 90% of the accesses are from Firefox and don't run Javascript.

What is your website about? Is it a consumer website aimed at teens, moms, fathers? Or a GNU hacking blog hosted on Jekyll? CONTEXT is key here.

No, we will not lose many more. The percentage who actually did something about Flash was minuscule, and we would lose more by not supporting IE6.

I've thought about this.

If I have a blog or similar media site, and require javascript, I might offer a $15/month option to allow access to a text-only interface and an RSS feed. No graphics, no pictures, a very simple link and text interface.

And the moment I start seeing subscriptions coming in, I'll believe in progressive enhancement again.

(Progressive enhancement doesn't affect me as an application developer in the web space. But if it did...)

Realistically most of your readers will come through links and the like, and thus not be willing to pay a subscription.

If it were a site I frequent, I'd probably be willing to pay $3-4/month for a simple, clean, static HTML version (no need to remove the pictures though -- I can choose whether to load those client-side). Something like this, say: [http://mnmlist.com/unknown/] I'd only pay $15-20/month if I could get the entire web like that :)

Most likely, at 15/month I'd just not visit your site.

EDIT: PLEASE don't use the navigation from that site though. It's ... way too vertical.

Interesting, but I think there's probably a large intersection between users who want a text-only interface and users who don't want their browsing habits on your site attached to their name in your database.

friendly reminder that people using browsers from last century may not be your target demo

At some point recently, the browser transformed from being an awesome interactive document viewer into being the world’s most advanced, widely-distributed application runtime.

This is the key sentence in the article and this is why I was motivated to become a web developer. Recently someone asked me if I felt like I was missing out by doing most of my programming on the web since desktop apps are "real programming" and I said no because the web is the best environment for writing apps today. I don't have to choose whether I want to write for Mac OS which I use myself, or Windows which most consumers use or Linux which hardcore techies use. I don't have to choose if my mobile app is iOS or Android first. Sure there are still tradeoffs, and sometimes a desktop or native mobile app is still going to be a good choice. But the browser today is an amazing environment that everyone on the web has access to and it's only getting better. And we should be excited about leveraging everything modern browsers can do to make great software.

That key sentence is at least half wrong. The "most advanced" part, at least. And the web was never the best environment for writing apps. And it never will be. It's like hoping to transform a fork into a spoon. I have spent one and a half decades in it, and it may be becoming a bearable environment, but it is far, far from the best. Unless it's the only thing you know; then it is the best by definition.

Maybe we're reading that sentence differently? I read it as "the most widely distributed runtime that is also relatively advanced." I took it as a given that Tom Dale does not believe the web is the most advanced programming environment.

Progressive Enhancement is still important for CONTENT SITES

Why? Search Engine accessibility.

It used to be that Googlebot wouldn't find content loaded asynchronously, or links that rely on Javascript. Now it's different - You can confirm that Googlebot discovers a lot of Javascript links using Webmaster Tools: https://www.google.com/webmasters/tools/home?hl=en

BUT - There's still no way to break from the "Page Paradigm" - Google needs URLs to send searchers to. They don't yet send people to specific states of a page. That's why I still use Progressive Enhancement, it forces me to ensure each piece of content has a URL that points to it.

Having URLs which point to specific resources is still 100% possible for JavaScript web apps. Take a look at Ember, as an example; those guys place a lot of emphasis on URLs as first-class citizens in frontend web apps.
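A minimal sketch of the idea (illustrative only, not Ember's actual router API): map URL patterns to application states, so that every piece of content keeps a deep link that works on first load.

```javascript
// Illustrative URL routing for a client-rendered app: each content
// state gets a pattern, so a link like /articles/42 is shareable and
// indexable rather than being an unnamed "state of a page".
var routes = [
  { pattern: /^\/articles\/(\d+)$/, handler: 'showArticle' },
  { pattern: /^\/$/, handler: 'home' }
];

function matchRoute(path) {
  for (var i = 0; i < routes.length; i++) {
    var m = routes[i].pattern.exec(path);
    if (m) return { handler: routes[i].handler, params: m.slice(1) };
  }
  return null; // unknown URL: fall back to a 404 state
}
```

On navigation the app would call `matchRoute` against `location.pathname` and render the matching state; the crucial design choice is that the URL drives the state, not the other way around.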

This post does nothing to address the biggest reason people might have Javascript disabled: security. If I'm browsing through Tor (or whatever), I'm not going to turn on Javascript to use your site. If your site doesn't work without it, you've lost a customer.

Granted, people who disable javascript are obviously vastly outnumbered, but just saying "fuck you" to security conscious (and most likely tech-savvy) users seems like a mistake.

I browse with NoScript on. When I seem to need Javascript, I may:

1. Turn it on for that site. (But this can be problematic if there are 15 different domains to turn it on for, only 12 of which are about showing me ads.)

2. Use IEtab in Firefox (not working as well recently).

3. Paste the URL into a different browser. I use IE mainly for my several least favorite sites and services -- Facebook, WebEx, etc. So I'm unlikely to mess things up too badly by viewing some JavaScript-heavy page in there.

4. Live without the site that demands all the JavaScript.

Now, one might hypothesize that a security-conscious, ad-hating web surfer such as myself isn't a big loss for most consumer websites. So perhaps it all works out in the end. But there also are a few companies that would pay a lot of money to have me look at their websites -- a fact that I infer from them paying PR people to attempt to get my attention -- and miss out solely because it's too hard to beat the Javascript requirement.

I think the real question is which group is larger: the customers I'll gain by enabling a feature that requires JavaScript or the customers I'll lose because they're unable to view a feature that requires JavaScript.

According to developer.yahoo.com[1],

> After crunching the numbers, we found a consistent rate of JavaScript-disabled requests hovering around 1% of the actual visitor traffic[...].

I work for a medium sized company that version tests almost every change we make, and in the end the version that wins is the feature with the higher conversion rate.

[1] http://developer.yahoo.com/blogs/ydn/many-users-javascript-d...

NoScript is the 4th most popular add-on for firefox with 2 million users. ( https://addons.mozilla.org/en-US/firefox/ )

Conversion rate is a limited metric. It misses the entire concept of word-of-mouth. Conversion rate can't measure people who don't show up in the first place.

I expect NoScript users to be more sophisticated "power users" - the kind of users that regular users go to for advice. If they aren't using your site then they will never recommend it to their more naive and more easily converted friends.

1% is a small percentage but can make up quite a large number of people.

I completely agree, but if a version test wins by more than a few percentage points, it usually makes sense to implement the feature even if it does make the page inaccessible to 1% of users.

I honestly doubt you'd be saying that if you were part of that 1%!

Except here, the 1% chose to be in the 1%.

<sarcasm>But security-aware people are negligible minority!</sarcasm>

I tried Bustle.com, showcased in the article as a good example of a pure Javascript website, on my Android browser, and the Javascript was used to reserve 25% of my screen to show me a "BUSTLE" banner that doesn't go away when I scroll down.

Don't expect your users to have a mouse. The share of web users on their mobile phone has grown from 6.5% to 17.25% since June 2011. Any bets on what the share will be in a year or two from now? (http://en.wikipedia.org/wiki/Usage_share_of_web_browsers#Sta...)

There's no reason sites like Tumblr shouldn't work without javascript. Period. And while there are some things on the web that are genuine applications (Trello, dashboards, etc.), the vast majority of things are content-driven, which should never require JS.

You claim facts without explaining your reasoning.. Let me put it this way using your own vocabulary:

    There's no reason sites like Tumblr *should work without javascript*. Period.
Why why!

I rather think the onus is on people like you to come up with a reason that they _shouldn't_.

Well, philosophically, I think having static content served in a backward compatible way is perfect. Pragmatically, I haven't found a good solution to do it without duplicating all the code. That by itself might be a very good reason as to why one would focus the development efforts on the 95% of users.

You say people like me. I'm more agnostic here.. just interested to understand the arguments of both side. I just think that a post saying "This is black. Period" doesn't add much to the discussion.

What duplication? Why do you need to use JS to render text? I'm not saying that a user without JS should expect all, or any, of the bells and whistles. But to use client-side JS rendering to show a user a paragraph or two (like Blogger), or literally a sentence (when Twitter did client-side rendering), is just insane. It breaks stuff, and is MORE work than just doing it the right way.

Well, without getting into details and irrelevant domain specific examples, let's say you want to load new content as you scroll down.

---------------- HTML static content way:

1. The new content will be fetched using Ajax. Already some problems: for the server-side rendering, the view needs to fetch the content from the DB, then load it into the template. Using Django, for instance, the view would fetch it and we'd show it using {{variables}}.

However, to fetch the data using Ajax, it's not just the view making a query; it needs an API endpoint, i.e. /api/fetch-new-data/. So, already, the code is duplicated. Yes, the server side could use the same API, but there's always the problem of returning JSON vs. Django ORM queries, etc.

2. Once the data is fetched (say in JSON), it needs to be rendered. How? Do you simply do a

    $.get('/api/whatever', function(data) {
        $('.some-div').append('<p>' + data + '</p>');
    });

It's fine if it's just a <p>. But usually we'd have more complex HTML, and thus we'd be using client-side templates. So the server-side templates need to be duplicated. There are various ways to do it, varying in complexity, but there is clearly some duplication here. And, bear in mind, it's usually not a simple .append; more JavaScript needs to run, which can alter the data, etc.

------------------------- Javascript way

1. Load pure HTML.

2. Fetch initial data and render it using a client-side template. (It can also be bootstrapped since it's in the same JSON format.)

3. On scroll, fetch more data and render it using the exact same code / client-side template.
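The reuse argument can be sketched like this (names are illustrative): one template function renders both the bootstrapped initial data and every page fetched on scroll, so there is no server-side duplicate to keep in sync.

```javascript
// One client-side template serves both the initial render and
// infinite scroll; the "duplication" of the HTML-first approach
// disappears because there is only one rendering code path.
function renderItems(items) {
  return items.map(function (item) {
    return '<li>' + item.title + '</li>';
  }).join('');
}

// Initial render from bootstrapped JSON:
var listHtml = renderItems([{ title: 'First post' }, { title: 'Second post' }]);

// Later, on scroll, the same function renders the next page, e.g.:
// list.insertAdjacentHTML('beforeend', renderItems(nextPage));
```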

Lastly, bear in mind that it's just a simple example. It's rarely 100% static content only; there would be forms with error validation, etc. If you want to do it the no-JavaScript way, you need to have the form submit, reload the view with validation errors, etc. Only then can you add some JavaScript to enhance it. Contrast that with simply validating on JavaScript submit. Yes, obviously, the server still needs to validate it, but that's part of the API; nothing needs to be re-rendered.

All in all, if you agree that the same code would do the pre-loading and the dynamic real-time stuff, you usually save yourself lots of headache and complexity.

I believe Blogger had that problem, as it shows different views for the same content. When you switch between views, it dynamically updates in JavaScript rather than doing a page reload. Could it be made better by having HTML/CSS content for no-JS browsers? Sure thing. Would that duplicate the code? Sure thing. And again, there are varying degrees of complexity. Maybe that example wouldn't be too complex, but it's more code to maintain, to test, etc.

It's a bit late here, but hopefully you understand what I mean. And by the way, I'm still not 100% certain about what's the best approach. I know that I used to be pro static-HTML-first for backward compatibility and no-JS users, with JavaScript only to enhance the page. But recently, I've tested doing it in JavaScript on the client, and it's seriously so much faster and cleaner.

And yes, there are some good frameworks to deal with the duplication, but they add lots of complexity. For instance, see Airbnb's Rendr, which tries to solve the exact problem that I'm talking about, i.e. being able to run Backbone.js on the server during the pre-rendering.

... the browser transformed from being an awesome interactive document viewer into being the world’s most advanced, widely-distributed application runtime.

If only that were actually true. In reality, we're designing the interfaces for these applications using a presentation language made basically for desktop publishing. For interactivity, we essentially have one more or less shite language (http://bonsaiden.github.io/JavaScript-Garden/) to choose from. We're still arguing over the very basics on whether we should use callbacks, promises, generators, etc. for simple sequential operations. Hell, we're still trying to figure out how to get a reasonable call stack record to debug when working with any of these options. And God help you if you want to use a modern language that compiles to Javascript and have your debugger too.

But to address the author's original point, I think progressive enhancement is alive and well. While the majority of browsing is done on the desktop, I just think it makes way more sense to think first about presenting your basic content and then enhancing it than how you're going to strip out all the bells and whistles to get your design across on less capable platforms. In the long run, the former will probably save you more time and QA effort. It's just more natural to think about using capabilities when present then working around their absence.

And no one says your baseline should be a screen reader for all possible web apps. Just pick the baseline that makes sense for what you're doing, and enhance from there. At some point, it may make more sense to fork your platform and have separate implementations for different pieces of your interface. It doesn't have to be one monolithic project that magically enhances from mobile phone screen reader all the way up to VR cave.

I disagree strongly. Concisely:

JS has many potential UI/UX benefits which should be used for the users' benefit: although they can also be used to users' disadvantage.

If your (static?) website shows blank with no-JS, I find it unlikely that you've considered UI/UX at all. I therefore assume that you are more likely to fall on the disadvantageous side.

I agree with all of his points. I think a lot of the counter-arguments are centered around "Yeah, but I see websites that unnecessarily use Javascript when a simple text-based solution will work". That's not a Javascript problem; that's a site-design problem.

I'm sure you could make a dynamic page that has a negligibly different loading time compared to a static page that both display similar (static) content, but it's the way that you do it that matters. Loading a page, that loads a library, that pulls in another library once the page is loaded, that then displays spinning gears while pulling in a bunch of static content is of course the wrong way to do it for a lot of things. But that's a design problem.

First of all, you still get HTML in the end. So whether you get the slightly reduced JSON payload size and use the user's CPU to generate HTML, or just get cached HTML from the server, you still end up with the same thing and, arguably, similar response times.

I find the answer is not one or the other it's both. If a certain page requires interactivity then embrace Javascript and do the interactivity with Angular or Ember. You end up writing less Javascript. If you do it as decorating html using jQuery then you will end up with more javascript. Most pages in web apps don't require this much interactivity on every page though. There may be a few pages here and there. Most of it is just document viewing. In that case just send down cached html. Sure Bustle.com is fast, but so is Basecamp, both take entirely different approaches to display pages.

When I first got into Knockout a few years ago, I was a kid in a candy store. I wanted to do everything with Javascript and Knockout. Soon I grew up and realized you just don't need all that crap to display an f'in table. It's just a table for God's sake. We have been displaying tables since the dawn of the web browser. In fact you will pay a client CPU cost trying to display a table in Angular when you could just send it over in HTML.

Now if that table requires heavy editing(not filtering, or sorting, that stuff is easy as decoration), then sure bring in Angular.

On the other hand if I need drag and drop, validation, on complex forms, I'll definitely bring in Angular.

Choose the right tool.

Stating that one approach or the other is always the right way is the problem. Figure out which one works best for the type of site you are working on.

How your site will be used is often a high level indicator of which approach will provide a better experience for your users. Gmail for example, no public part of the site, not uncommon for users to leave it open in a tab all day. Often great for all Javascript approach.

Twitter on the opposite end. Lots of public facing pages, performance was worse when they required Javascript just to render 140 characters on the screen. This style of site is generally better off with a progressive enhancement approach.

Great, I'm going to be debugging other people's websites in my spare time as well as at work.

I experienced this recently. I had to look through the source of a site because their image gallery javascript was broken.

I was already looking at a gallery of thumbnails... how hard can it be to make each thumbnail a link to the image in question, and at runtime attach a javascript handler to open the image in a pop-up or whatever?
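A sketch of that approach (`openLightbox` is a hypothetical helper, not a real library call): the thumbnail is a plain link that works with JS disabled, and the script merely upgrades it.

```javascript
// Progressive enhancement: the <a href> to the full image works with
// JS off or broken; with JS on, a click handler intercepts it and
// opens a lightbox instead of navigating away.
function enhanceThumbnail(link, openLightbox) {
  link.addEventListener('click', function (event) {
    event.preventDefault();      // stay on the page...
    openLightbox(link.href);     // ...and show the full image in a lightbox
  });
}
```

If the script fails to load or throws, nothing is enhanced and the gallery degrades to ordinary links, which is exactly the failure mode the parent comment wanted.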

A great javascript app is a wonderful thing, but if you fail, you tend to fail hard.

An HTML/CSS page + progressive enhancement tends to involve much less shooting-oneself-in-the-foot.

If you've got the talent, time and budget to do it well (note that is 'AND' not 'OR'. Gawker being a case of 2 out of 3 not being enough) then please go ahead.

However - if you have any doubts about your ability to see the whole thing through to perfection then a half-assed website is much less awful for your audience than a half-assed app.

If you live somewhere with a decent internet connection and don't travel often you may have forgotten that lots of places still have slow or unreliable internet connections. Most of those people probably have JavaScript enabled too, but every extra request required to use the site is a point of failure.

I'll be the first to advocate requiring JavaScript when doing so significantly increases value, but for content sites please at least include the main content directly in the HTML.

I pretty much agree with all of the points in this article. I do wonder, though, why Bustle.com (the example used) is an Ember.js app and why it displays nothing but a blank page if Javascript is turned off. Skylight makes perfect sense as a full JS app. But Bustle, a content site, seems to be more of an "interactive document" (as he mentions).

Bustle actually uses PhantomJS to create static versions of the site, for serving to search spiders.

The advantage of using Ember.js for Bustle is that it's really, really, ridiculously fast. Seriously, try it. Go to bustle.com and click around.

They could make it work without JavaScript, but they're a new company with a long list of technical challenges to solve. They ran the numbers and the percentage of users with JS disabled is so microscopic it just doesn't make sense to spend time on it.

"Bustle actually uses PhantomJS to create static versions of the site, for serving to search spiders."

What value does using PhantomJS offer above what progressive enhancement gives you for free?

This feels like a contradiction. Avoiding progressive enhancement, and then bolting on a hack for search spiders in a way that is less robust than the technique you're avoiding.

"The advantage of using Ember.js for Bustle is that it's really, really, ridiculously fast. Seriously, try it. Go to bustle.com and click around."

The initial load is horribly slow. For example: http://www.bustle.com/articles/4549-pew-poll-american-people... took 10 seconds. That's very slow for a one page article.

That's gonna hurt people following links to the page. That's exactly the reason why Twitter walked away from its JavaScript-driven content and went for progressive enhancement. A substantial portion of new visitors will have a cold cache, and have this annoying wait for what is effectively a one-page article. (cf. http://statichtml.com/2011/google-ajax-libraries-caching.htm... )

"They could make it work without JavaScript, but they're a new company with a long list of technical challenges to solve. They ran the numbers and the percentage of users with JS disabled is so microscopic it just doesn't make sense to spend time on it."

Another contradiction. Progressive enhancement isn't a technical challenge, it is so brain dead simple.

So they figured out the number of users with JavaScript disabled is too small to warrant supporting. Interesting, except, progressive enhancement isn't solely about people with JavaScript disabled, right? cf: http://isolani.co.uk/blog/javascript/DisablingJavaScriptAski...

Also, they ran the numbers under the incorrect assumption that progressive enhancement only impacts people with JavaScript disabled. But did they run the same numbers that determined the number of search-spider visits wasn't microscopic, to justify the PhantomJS bodge you initially mentioned? Are you really implicitly asserting that there are far more search-spider visitors to bustle.com than visitors with JavaScript disabled? (I'd love to understand the logic that led to that determination.)

So, what justification was there to spend time on building a PhantomJS site scraper to provide static content to search engine spiders, and yet fail to appreciate that progressive enhancement would have served that spider audience, as well as the JavaScript audience, as well as the variety of issues that progressive enhancement helps alleviate?

But, if bustle.com were a content site, then progressive enhancement would be the way to go, right? But this is an Ember app, so it is not a content site (clearly). Except when search spiders visit; then there's a need for a static version of each page. It's very confusing. Is bustle.com an app or a website?

Also, how does bustle.com / ember, guarantee perfect delivery of assets other than HTML to a visitor's browser? How does it guarantee robustness?

For example, when using a CDN, how does it manage when this happens: http://www.theregister.co.uk/2012/01/05/google_opendns_clash...

How does bustle.com / ember protect your JavaScript so that when a third-party chunk of JavaScript (like Google Analytics, Disqus, Facebook, ChartBeat, Quantcase, WebTrends) does something funky, or hiccups and causes a JavaScript error?

Javascript is great for making more efficient sites. Here's why:

1) Static resources can be cached on a CDN and composited on the client instead of an overloaded app server

2) You can load data instead of heavy and repetitive HTML over the wire

3) You can cache the data in the client and re-use it later, making for snappier interfaces

That said, you have to watch out for URLs. Just because you can write everything with javascript doesn't mean you should break URLs. And of course, crawlers other than google's crawler will probably not be able to execute your JS.

I'm actually growing fairly tired of sites trying to push everything into the client to free up their servers. I routinely run into sites that easily peg an entire core and/or use > 1 GB RAM. A gross generalization, but if you're not able to tune it on the back-end, you probably can't tune it on the front-end, and you're still making your users suffer.

1 has been possible since the beginning of time. Every modern browser has a cache built-in.

The browser requests CDN resources directly and Javascript is used to combine those static files into something that the user sees. Since the code is executing on the client, the app server is not overloaded.

1/ Because you can't cache static resources on a CDN without javascript?

2/ Because you can't load light HTML over the wire?

3/ Because you can't cache HTML pages client-side?

Are you serious?

1. The browser requests CDN resources directly and Javascript is used to combine those static files into something that the user sees. Since the code is executing on the client, the app server is not overloaded.

2. Suppose you have a list of 20 items. You could either output HTML with all the tags already rendered for each of the 20 items, or you could output a JSON array with 20 objects, which is much less on the wire.

3. If you are rendering stuff on the client and only fetching data for it (which is what modern Javascript MVC frameworks let you do), then you can cache that data and re-use it. Caching entire HTML pages is at a much lower level of granularity than caching each piece of data you get from the server and reassembling it in different ways.
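Point 3 can be sketched as a tiny record cache (names are illustrative): once a record has been fetched, any view that needs it again costs no request, which is the granularity advantage over caching whole pages.

```javascript
// Illustrative client-side data cache: records are kept by id, so
// re-rendering a view reuses the data instead of re-fetching it.
var recordCache = {};

function getRecord(id, fetchFn) {
  if (recordCache[id]) return Promise.resolve(recordCache[id]);
  return fetchFn(id).then(function (record) {
    recordCache[id] = record;
    return record;
  });
}
```

In a real app `fetchFn` would be an Ajax call; the same cached record can then back a list view, a detail view, and a sidebar without three round trips.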

1. Use a CDN that supports ESI, like CloudFront.

2. It's not that much of a difference, after gzipping, which you surely already do.

3. See number 1.

Moreover, in the end, you'll get a less heavy document to be processed by the browser. I swear my all-bells-and-whistles i7 X1 Carbon still struggles on some javascript-heavy sites.

Please do elaborate on 1 :)

I like to save every document I read online (including discussion, if possible, these often contain insightful posts). Maybe it's obsessive, but I do it.

Some documents, especially those using Ajax for loading content or multiple pages, make this difficult. I hate them. (Hacker News, oddly, does it too - when a discussion is archived, it becomes paged, which makes it more complicated to store.)

I wish there were a standard way to store a page offline, including all the JS changes made to its looks, all the external content, etc.

Lots of absolute views in this thread.

How about: If it's profitable for your site to offer a non-JS fallback, do it. If it isn't, don't.

There are other things besides personal financial profit. Think of them as taxes to keep the Internet working and flexible.

If you don't pay that tax, you're personally contributing to a future in which simple documents become full-fledged programs, and everyone loses the ability to easily manipulate and interact with them in any but author-defined ways.

There are cases where you can be exempt from that tax: if your site is not about documents but about their transformations, i.e. it's more of a process than data. (Then you should call it an app, not a site.) But the vast majority of sites aren't.

Are there good tools out there for helping to ensure that a JavaScript web app without progressive enhancement is accessible to the disabled (e.g. screen readers can parse it, speech recognition software can interact with it)?

I ask because I've recently discovered that Google has massively failed in this department with some of their products, at least as far as speech recognition is concerned. Google Docs is a great example of what I'm talking about. If you try to use it with Dragon NaturallySpeaking, buttons and menu items are often not recognized, text entry is only reliable by using the separate (and inconvenient) Dragon "dictation box", editing is a nightmare, and review comments can only be placed by actually copying from a separate program. Your best bet if you need to collaborate is honestly to just use Microsoft Word, and then either upload and convert, or copy and paste, and then accept the fact that a lot of collaboration tools won't be usable by you or any of your collaborators.

I can't imagine how frustrating it must be to try to use modern web apps as someone who can't type effectively or read a screen, and it seems like the problem is only going to get worse as people rely more on canvas without taking accessibility into consideration.

While I agree that you can assume JavaScript is enabled, I really think that "conventional" web development still has many advantages over building SPAs.

Business logic on the server, HTML generated on the server, a conventional MVC architecture; use Ajax and pushState to make it highly interactive.

Fine - you can assume JS is available, but from that it simply does not follow that you have to throw away the traditional (Rails-style) dev model of the web.

You can have your hypermedia API on api.<yourdomain> and then your AngularJS (or whatever) app on app.<yourdomain>, consuming your API. Then all you need to do is serve HTML from your API when the User-Agent accepts HTML (bonus points: set a canonical link tag pointing to your foshizzle app so you don't lose SEO).

Best of both worlds.
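A minimal sketch of that content-negotiation idea (all names here are hypothetical; a real API would use a proper Accept-header parser and a templating layer rather than string concatenation):

```javascript
// Serve JSON to API clients, a minimal crawlable HTML page (with a
// canonical link back to the JS app) to browsers and crawlers.
function negotiate(acceptHeader, resource) {
  var wantsHtml = /text\/html/.test(acceptHeader || '');
  if (wantsHtml) {
    // app.example.com is a stand-in for your app.<yourdomain>
    return '<link rel="canonical" href="https://app.example.com/' +
           resource.id + '">' +
           '<h1>' + resource.title + '</h1>';
  }
  return JSON.stringify(resource);
}
```

The same route handler can then branch on the request's Accept header and hand `negotiate` the resource it already loaded.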

PS: All this crazy talk stems from the fact that JavaScript created an apartheid on the web. We need to make a clear distinction between the HTTP web and the JavaScript-enabled web. The fact that the same software (the browser) serves this dual purpose adds to the confusion and allows bad architecture decisions, intertwining content and rich interfaces inside the same hypertext mudball.

Well, I guess you wouldn't want your monitoring tool indexed by Google anyway, right? As soon as JavaScript is the only way to access some data, it becomes harder to index. Google provides some solutions https://developers.google.com/webmasters/ajax-crawling/ but the situation is still not satisfactory.
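The scheme Google documents there maps hash-bang URLs onto an `_escaped_fragment_` query parameter that the crawler fetches instead. A simplified sketch of just the URL rewriting (the server still has to return a rendered snapshot for such requests):

```javascript
// Rewrite http://example.com/#!/posts/1 into the URL Google's crawler
// would actually request under the AJAX crawling scheme.
function escapedFragmentUrl(url) {
  var i = url.indexOf('#!');
  if (i === -1) return url; // no hash-bang: nothing to rewrite
  var base = url.slice(0, i);
  var sep = base.indexOf('?') === -1 ? '?' : '&';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(url.slice(i + 2));
}
```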

For my clients it's usually the case that being found well in Google is a major part of their business case. PE makes sure that a basic crawlable version of your website exists with proper titles and tags.

Not a single mention of SEO on the article. I guess Google is dead too.

Try viewing meta.discourse.org with JavaScript off, then look under the hood - it's a pretty easy problem to solve with some <noscript> tags. Other approaches use PhantomJS to create a "rendered" copy that is accessible to primitive clients/scrapers.
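The <noscript> approach can be as simple as shipping a static, crawlable copy of the content alongside the app's mount point (a hypothetical sketch; the real Discourse markup differs):

```html
<div id="app"><!-- the Ember/Angular/etc. app renders here --></div>
<noscript>
  <!-- static, crawlable copy of the content for primitive clients -->
  <h1>Welcome to the forum</h1>
  <a href="/t/some-topic">Some topic</a>
</noscript>
```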

> Other approaches use PhantomJS to create a "rendered" copy that is accessible to primitive clients/scrapers.

Then your site does not require client-side JS support.

The point is not about what technologies you use to produce documents, the point is what data you [can] serve.

It wasn't mentioned because that particular fallacy has already been beat to death.

Take a look at Discourse or Bustle. They're pure Javascript apps that are still SEO-friendly.

No one said it can't be done, but it's far from trivial, hence my surprise in its omission.

And the PhantomJS approach mentioned below seems more like a workaround to me than a solution.

Only if you never need any referrals from search engines. I know a site with an awesome locally sourced food delivery/pickup system, connecting consumers directly with the growers.

Their site is 100% in JS. And if you google for anything even remotely close to what this site sells you simply cannot find them.

Only if you're a members-only app would I say progressive enhancement is dead. Well, that is, unless you care about the millions of users on slower mobile connections with crappy smartphones.

    It’s a myth that if you use a client side MVC framework
    that your application’s content cannot be indexed by
    search engines. In fact, Discourse forums were indexable
    by Google the day we launched.

They are probably a good case to follow, if you want to see what people's experiences with SEO for JS-based services are.

Why do people who are against server side templates always provide "well, just use server side templates and client side rendering both!" as a solution? Hooray, I can write my app twice for no reason! Way to sell me on pure javascript for delivering content.

That's a developer problem, not a Javascript problem.

No, that's a JavaScript problem that makes things more complicated for little gain. With progressive enhancement you don't need all these stupid PhantomJS tricks.

I'm ready for JavaScript sites ... but notice a very important nugget in the author's post: "web apps need to have good URLs". This cannot be overstated.

1. Stop preventing middle and right clicks on JavaScript enabled links. For left clicks, sure ... control the flow.

2. Respect the fact that this is NOT a desktop environment, therefore my view of your program's "screens" should be on a per URL basis. I actually might want to view a list you generated in my own separate "window" or "screen" with the URL visible, usable, and savable in the browser.

Wow, the size of the Ember app vs. Typical webapp + JavaScript is impressive.

Inspecting the Bustle app with the new Chrome Ember Inspector is very cool.

Has Bustle open sourced any of their components or written on how they developed the app?

One of the engineers wrote a little bit about it on Reddit[1] awhile ago.

[1] http://www.reddit.com/r/webdev/comments/1kf84d/bustlecoms_sp...

I doubt devs are "ashamed" of making js web apps. The main issue is that it takes more effort to do so than a traditional web app. There are browser specific quirks. Frameworks like rails are so well-integrated with the db layer that it will be difficult to match that productivity with pure JS apps. And finally, a lot of devs don't want to take the time to learn, when the current standard is perfectly acceptable for most use cases.

140 comments in and no mention of Turbolinks? You hit a URL, you get the content. You click a link, it just fetches the new page and replaces the body: no reloading the CSS or JavaScript, so content gets rendered faster. Search engines get the actual content from the URLs; users get the speed. Making your scripts idempotent can be tricky, but it certainly sounds like a good base.
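The core of the Turbolinks trick can be sketched in a few lines (a simplification; the real library also handles caching, history, and asset-change detection, and `extractBody` here is a hypothetical helper):

```javascript
// Given the full HTML of the next page, keep only what's inside <body>,
// so it can replace the current body without reloading CSS/JS.
function extractBody(html) {
  var m = html.match(/<body[^>]*>([\s\S]*)<\/body>/i);
  return m ? m[1] : html; // fall back to the raw HTML if no <body> found
}

// Browser wiring would look roughly like (not run here):
//   $.get(url, function (html) {
//     document.body.innerHTML = extractBody(html);
//     history.pushState({}, '', url);
//   });
```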

The problem is not Javascript per se. Personally, I like a good webapp.

What annoys me is the tendency of Javascript guys to rebuild every damn application there is as a webapp, and rave about it like it's the best thing ever. Javascript has become their hammer, and the whole world looks like it needs a good pounding.

Just because you can doesn't mean you should.

For me, the real indicator that progressive enhancement is "dead" was that as I began reading the post, I was left wondering "what the &*%^ is this progressive enhancement that he's declaring as dead?" until, pretty much, when I finished the post.

I still teach both "graceful degradation" and "progressive enhancement" to students in my "Web101 - Introduction to Web Development and Design"...

This post doesn't address the main reasons for PE.

- Accessibility
- Spiderability by search engines

If you want to say PE is dead please explain how these don't matter to most websites.

Javascript is cancer.

Yeah, I like to split the world in two: web pages and web apps. For web apps, I don't hesitate to assume JavaScript.

The one thing that I think is still kind of uncovered ground for JavaScript frameworks is proper i18n and l10n support.

Has been dead for a while. The people who complain on HN about "I get a white page" are treated like trolls.

you are the troll obviously, calling people that disagree with you trolls.

Edit: actually, I'll make an article out of this, because I came late to the discussion (I'm in a European timezone) and the message probably won't be heard.

People who think apps should be usable entirely without JavaScript certainly miss the point. So do people who think progressive enhancement is only about supporting people who have deactivated JavaScript.

As the author mentioned, the browser is now more an execution environment than a document viewer. You know what that means? It means that developers have no control over the execution environment. With server-side code, if it works for you, it works for everyone. With client-side code, you'll never know. You don't know what extensions your users have installed. You don't know how stable their systems are. You don't know how stable their connections are. And you can't ask your users to have as carefully crafted an environment as your servers.

What this should lead us to conclude is that the more heavily your app relies on JavaScript, the better it should be at error handling.

How do you handle errors in JavaScript? If an error occurs in a callback function, clicking that <a href="#"> again and again will simply do nothing, and your user will get frustrated and yell: "it does not work!"

With progressive enhancement and graceful degradation, it suddenly becomes simple. Your link has a real href. You can deactivate all event handlers from a window.onerror handler. That way, clicking a link after a crash will follow it.

You don't even have to implement the feature entirely on the server side. If your client-side feature can be emulated on the server side, do it (and your user won't even realize something went wrong); if it can't, simply warn your user about it. Either way, the JavaScript runtime will have been reinitialized.
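That degradation strategy can be sketched like this (a simplification; onGlobalError would be installed as window.onerror in a browser, and the enhanced links are assumed to keep a real href):

```javascript
// Flag flipped by the global error handler.
var jsHealthy = true;

// Install as window.onerror in the browser (hypothetical wiring).
function onGlobalError(message, url, lineno) {
  jsHealthy = false; // degrade: stop intercepting clicks from now on
  return false;      // let the browser report the error as usual
}

// Click handlers check this before calling preventDefault(); when it
// returns false, the browser simply follows the link's real href.
function shouldIntercept() {
  return jsHealthy;
}
```

After the first uncaught error, every enhanced link falls back to plain navigation, which also reinitializes the JavaScript runtime.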

So, for all of this to work and make sense, we just have to use modern definitions:

* progressive enhancement is ensuring no link / button / whatever will "freeze" if JavaScript crashes

* graceful degradation is ensuring the interface reverts to a useful state when an error occurs (like showing the submit button again for what were Ajax forms). This can easily be done if your page is composed of objects that respond to some kind of destructor method.

If you think client-side errors don't happen that much, just put something like this in your code:

    window.onerror = function( message, url, lineno ){
      $.post( my_exception_url, { error: { message: message, url: url, lineno: lineno, page_url: window.location.href } } );
    };

This will post all exceptions to a server-side URL, where you can relay them to your exception management system (I simply raise a server-side exception using the passed parameters) or store them. You'll be surprised.
