Breaking the web with hash-bangs (isolani.co.uk)
287 points by Terretta on Feb 9, 2011 | 94 comments

I accept the brittleness of a content-oriented site switching to a JavaScript-driven SPI implementation; however, I'm confused by the assertion that the strongest reason to switch is because it's cool. I can't speak for Gawker, but some sites may switch because they want the browser to do more of the rendering work, such as rendering templates. Others may want to avoid the overhead of a full page refresh as users navigate the site.

The author clearly dislikes what's going on, and the post would be stronger if he simply stated the disadvantages and let others speak to the benefits rather than putting up a strawman and claiming that people implement this sort of site because it's cool.

       Others may want to avoid the overhead with a full page 
       refresh as users navigate the site.
Avoiding a full-page REFRESH only makes sense when the current page does NOT change (e.g. doing login).

Speaking of overhead: to avoid bandwidth problems related to page-switching, 90% of websites out there would be better off if they just added Cache-Control headers to their static assets or gzipped the content they send (yes, it's kind of ironic that so many developers ignore common-sense techniques already available in the HTTP protocol).
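For what it's worth, the kind of response headers meant here look something like this (illustrative values, not any particular site's actual configuration):

```
HTTP/1.1 200 OK
Content-Type: text/css
Content-Encoding: gzip
Cache-Control: public, max-age=31536000
```

With a long max-age on versioned static assets, repeat visitors don't re-download (or even re-request) them on every page load.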

Also, if the overhead is in the server-side processing, then offload that processing by making it async and caching the results. Why is that so hard?

For me Twitter's new interface is slower and more cluttered than the old one, without providing the benefits of the iPhone client I'm using. It's a disaster IMHO, so speculating they only did it because it's cool is not unreasonable.

Good, healthy observations, thanks.

Avoiding a full-page REFRESH only makes sense when the current page does NOT change (e.g. doing login)

User navigates from "A" to "B". Some or even most content on the page changes. Scenario 1: the new page is fetched from the server. Scenario 2: content is fetched from the server via AJAX, rendered with a template in the browser, then injected into the current DOM.

I assert there are technical benefits to the second scenario. Whether they outweigh the disadvantages is not my argument. My argument is that there are technical benefits and that "because it's cool" is a straw man argument.
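Scenario 2 can be sketched roughly like this (the JSON endpoint and the toy template function are both invented for illustration, not anyone's actual code):

```javascript
// A sketch of scenario 2: fetch only the content as data, render it in
// the browser, and inject it into the current DOM.
function renderArticle(data) {
  // stand-in for a real client-side template engine
  return '<h1>' + data.title + '</h1><div>' + data.body + '</div>';
}

// Browser-only wiring, guarded so the pure render step runs anywhere:
if (typeof jQuery !== 'undefined') {
  jQuery.getJSON('/articles/1234.json', function (data) {
    jQuery('#content').html(renderArticle(data)); // inject into the DOM
  });
}
```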

if the overhead is in the server-side processing done, then offload that processing making it async and cache the results. Why is that so hard?

Yes, and one such strategy is to offload it to the user's browser. Again, I am not arguing it is the best strategy, as I am not responsible for Gawker Media or your application.

Didn't the article author assert that you can use click handlers on your links to achieve AJAX loading without a full page refresh?

Indeed, but then the URL doesn't change, so the page can't be emailed, tweeted, etc. The hash-bang allows you to change the url, use AJAX to change the content, and retain browser history.

The history API in HTML5 allows you to accomplish the same thing without the hash-bang, but browser support is limited. I favour using the history API for progressive enhancement and defaulting to traditional page requests.
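A minimal sketch of that progressive-enhancement check (navigateTo and loadViaAjax are invented names, not a real API):

```javascript
// Use the History API where available; fall back to a traditional
// page request everywhere else.
function supportsPushState(win) {
  return !!(win.history && win.history.pushState);
}

function navigateTo(win, url, loadViaAjax) {
  if (supportsPushState(win)) {
    win.history.pushState(null, '', url); // clean URL, no reload
    loadViaAjax(url);
    return 'pushState';
  }
  win.location.href = url; // plain old navigation
  return 'reload';
}
```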

+1 for history API.

It's unbelievably neat to transition between pages without ever redrawing the full screen. Pity there won't be any IE9 support.

Yes of course he did, however every design decision has trade-offs. Using click handlers has some costs and some benefits. Using URL fragments has costs and benefits.

I'm not disputing that the author has proposed a way forward, nor am I claiming that the author's points about the costs of using fragments are mistaken. I am only disputing the argument that there is no reason to use URL fragments besides "coolness."

It depends (of course). First of all, markup-wise, page content can make up only a small part of the whole page's HTML. Navigation, widgets, etc. can be substantial.

Also, you'd want Expires, not just Cache-Control. Otherwise you may end up sending dozens of requests just to get 304s back.

Keeping that in mind, sending a single AJAX request which delivers _only_ the content seems quite beneficial compared to the full-refresh scenario.

some sites may switch because they want the browser to do more of the rendering work, such as rendering templates. Others may want to avoid the overhead with a full page refresh as users navigate the site

Both of these can be achieved using progressive enhancement. There is no technical reason to include it in the page itself.

There are technical problems with progressive enhancement: the latency between page load and the enhancement being applied is a fairly obvious one. Another is that providing full HTML fallbacks combined with progressive enhancement can be very complicated and time-consuming; it's very easy to see cases where time-to-launch wins out over semantic correctness.

Hopefully HTML5 pushState will alleviate a lot of the complications with providing full fallbacks.

Also, the elephant in the room that gets missed is that many rail about technical correctness without taking the target market into account. Personally I build web applications that replace desktop applications. Progressive enhancement would kill the simplicity of our programming model and literally triple the work effort, because we would be in the hybrid world of half server side and half client side.

As it sits, we use a full client-side UI library and push all UI concerns into the UI; the clients communicate with the server via REST to get data, and the client is responsible for rendering that data. We research our numbers extensively, and the cost analysis of changing our development model to support clients that cannot run full JavaScript supports the conclusion (at least for us) that a progressive enhancement development model is not cost effective.

The money recouped with the shortened development cycle of Ajax clients outweighs the amount we would receive chasing some fraction of the < 2% of potential clients. If we get to the point that we need to chase that 2% for growth, we are a) doing great and b) would be better served segmenting that traffic to a completely independent server-framework-based UI built for those clients.

It is my belief that mixing the two development models gives you the worst of both worlds when it comes to features and maintainability.

The latency between page load and progressive enhancement being applied shows up primarily if you load your JavaScript at the bottom of the page. Now I know that this is what Yahoo!'s performance rules suggest; I was on the team that made those recommendations. They made sense between 2006-2009, but browsers have improved significantly since then and that rule isn't necessarily true any more. You can load scripts in the head of your document without any of the blocking issues we saw in the past. You can even download multiple scripts in parallel to reduce latency. See my <a href="http://calendar.perfplanet.com/2010/thoughts-on-performance/">thoughts on performance</a> on the 2010 performance advent calendar.
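The parallel-download technique alluded to here is usually done with dynamically created script elements; a sketch (URLs are placeholders):

```javascript
// Dynamically injected scripts download in parallel and don't block HTML
// parsing the way plain <script src> tags historically did.
function loadScriptsInParallel(doc, urls) {
  var head = doc.getElementsByTagName('head')[0];
  return urls.map(function (url) {
    var s = doc.createElement('script');
    s.src = url;
    s.async = true; // execute whenever ready; order not guaranteed
    head.appendChild(s);
    return s;
  });
}
```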

Interesting, thanks. Does "browsers have improved significantly since [2006]" mean that we still need bottom scripts for IE?

IE6 is turning into a C grade browser this quarter (http://yuiblog.com/blog/2010/11/03/gbs-update-2010q4/) so you can probably just not serve JavaScript to it (server side detection). IE7 still doesn't do parallel script downloads, so you might need to do bottom loading for it, but it's best to first find out what your users use before optimizing for it. You can get really complicated and change your page structure based on useragent and user's network characteristics, but that requires far more development effort.

Gawker redirects IE6 to their (crappy) mobile site, for what it's worth.

Nice remark; thanks!

Full (unbroken) link: http://calendar.perfplanet.com/2010/thoughts-on-performance/

The author does his point a disservice by being under-informed, imo. I agree that Gawker's redesign is horrible, and share his hate of needless javascript (why re-implement half the browser stack?), but his suggestions for improvement would actually be a reduction in functionality.

He suggests using real URLs - instead of "/#!5753509/hello-world-this-is-the-new-lifehacker" we would have "/5753509/hello-world-this-is-the-new-lifehacker". Sounds great, but what happens when you click to a new article, which is loaded with AJAX?

> Engineers will mutter something about preserving state within an Ajax application. And frankly, that’s a ridiculous reason for breaking URLs like that.

Is it? Let's try things all possible ways (with the assumption that content is loaded via AJAX) and see what happens. Firstly, the current state:

Address bar: "/#!5753509/hello-world-this-is-the-new-lifehacker", click on a link, now it's "/#!downloads/5753887". I can bookmark this, send the link to a friend, etc etc.

Option two is to not track this at all, after all preserving state isn't that important, right?

Address bar: "/#!5753509/hello-world-this-is-the-new-lifehacker", click a link, it doesn't change. I can't bookmark the new page, I can't send the link to a friend, my back button won't work properly.

Third option is to use real URLs as he suggested... we need to track state (otherwise we hit the same problems in option 2) so let's try that: Address bar: "/5753509/hello-world-this-is-the-new-lifehacker", click a link, it now contains "/5753509/hello-world-this-is-the-new-lifehacker#!downloads/5753887". Oops. That's even less clean.

Conclusion: If you must load content with AJAX, using URL fragments to track state is the most functional and cleanest-URL option available.
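As a sketch of what that state tracking costs in code, mapping the fragment back to a fetchable path is nearly a one-liner (URL shapes follow the Lifehacker examples above):

```javascript
// "#!downloads/5753887" -> "/downloads/5753887"; null means there is no
// client-side state to restore, so render the page normally.
function hashBangToPath(hash) {
  var m = /^#!(.+)$/.exec(hash);
  return m ? '/' + m[1] : null;
}
```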

"The author does his point a disservice by being under-informed, imo."

I admit my Google searching turned up no useful results about hash-banging URLs for new websites, only Google's attempt to index sites that ignored progressive enhancement and hid all content behind Ajax calls.

"Sounds great, but what happens when you click to a new article, which is loaded with AJAX?"

If you want to load the content in via Ajax, you start with a valid and direct URL to a piece of content. You can then use JavaScript on the click handler to mangle the URL into a hash-bang, and then proceed as you wish. You get all the benefits of hash-bang without the downside of uncrawlable URLs.

"now it's "/#!downloads/5753887". I can bookmark this, send the link to a friend, etc etc."

* Bookmark it to a third-party service - yes (assuming they don't truncate fragment identifiers!)

* That service then retrieves the content of that article (for summarising, entity recognition, extracting microformatted data) - no.

The hash-bang URL is not a direct reference to the piece of content. The indirection shenanigans aren't part of HTTP/1.1 or RFC 2396. So it's not RESTful, for starters.

"after all preserving state isn't that important, right"

That can be done without making all links on the page non-traversable with a crawler.

"Conclusion: If you must load content with AJAX, using URL fragments to track state is the most functional and cleanest-URL option available."

A clean URL would not be an indirect reference that needs to be reformatted into a direct reference before it can be curled.

State can be done without mangling URLs in this way.

> You can then use JavaScript on the click handler to mangle the URL into a hash-bang, and then proceed as you wish

I wish, unfortunately this is only supported in the very latest browsers (I've literally seen one website do this [1]). You can only change the portion of the address bar following the "#" with javascript, which is why you see URLs like "/content/30485#!content/9384" in some places.

The short of it is, if you insist on using AJAX to load your primary content, and to power your primary navigation, then the way they do it is the best method at our disposal at this time.

Personally, I don't think the best method is good enough, but right now there is no better solution - imo, the only way to avoid this nastiness is not to load your content using AJAX in the first place! I don't see the benefit in throwing away half of HTTP, all of REST, breaking inter-operability and SEO, requiring javascript, and having ugly URLs, in order for... what? It's hard to see the plus side to be honest, this is categorically the Wrong Way to make a website and I wish people would stop doing it!

[1] http://www.20thingsilearned.com/

"I wish, unfortunately this is only supported in the very latest browsers (I've literally seen one website do this [1]). You can only change the portion of the address bar following the "#" with javascript, which is why you see URLs like "/content/30485#!content/9384" in some places."

Okay, you are clearly missing the solution here.

This is what the html looks like: <a href="path/to/page">Page</a>

Then add a click handler to the link (using your preferred JavaScript library): $('a').click(function() { this.href = '#!' + $(this).attr('href'); });

So this code mangles your URL into a hashbang with JavaScript when the link is clicked. Now the rest of your code remains exactly the same, and updates the window.location.hash the same way as before - it gets the same value as before.

Except the benefit is that anything without JavaScript - like a crawler - sees a working URL. And the JavaScript then mangles the URL appropriately, leaving your framework blissfully unaware of the difference.

This is progressive enhancement - using JavaScript to mangle links that are only meant to work when JavaScript is available (and in Gawker's case, make it to the browser). If you're quite confident in your JavaScript skills, you can mangle all the links on the page in one go right at DOMReady.

The key is not expecting JavaScript to be running. And when JavaScript is indeed running, then mangle links to your heart's content.
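The DOMReady variant mentioned above might look like this (a sketch; the selector and URL scheme are assumptions):

```javascript
// Rewrite every root-relative href into a hash-bang in one pass, only
// once we know JavaScript is actually running.
function mangleHref(href) {
  // "/path/to/page" -> "#!path/to/page"; leave anchors and absolute URLs alone
  return /^\//.test(href) ? '#!' + href.slice(1) : href;
}

if (typeof jQuery !== 'undefined') {
  jQuery(function () { // DOMReady
    jQuery('a[href]').attr('href', function (i, href) {
      return mangleHref(href);
    });
  });
}
```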

Ah yes, I did misunderstand what the OP and you meant by that. Yes, progressive enhancement would be preferable, I (almost) always try to use the PE approach myself.

The problem with having this mix though is that someone could visit "/45382/some-content-url" with JS turned on and then click a mangled link - taking them to "/45382/some-content-url#!94812/some-other-content" which is absolutely horrible.

As far as I'm concerned, the only real solution is to not use JavaScript for loading pages - just, you know, invoke real page loads. I really don't see why this method is beneficial at all; there is nothing of merit in the design or functionality of Gawker, in my opinion.

Along these lines, one pattern is to build a widget or page that renders in straight html and has links to take you to pages to perform actions. If javascript exists, scoop up the html and introspect it to build your rich widget. If the javascript never loads, you have a working page.

This, I believe, is a good use of custom data attributes. When your JS introspects the HTML, it can use the data there for configuration and setting values. It's messy and often impossible to use the same values you output to HTML in your JavaScript models. (For instance, you might have a Number with arbitrary precision on your model, but only spit it onto the page rounded to a few decimal places.)

But, yeah, sure. It's some extra work and planning we don't always have time for.
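A sketch of the pattern, with a DOM-free seam so the idea is visible (the attribute names are invented for illustration):

```javascript
// Pull widget configuration out of data-* attributes. The visible HTML
// stays the no-JS fallback, while the full-precision value rides along
// in an attribute rather than the rounded text the user sees.
function widgetConfig(getAttr) {
  return {
    value: parseFloat(getAttr('data-value')),   // e.g. "0.1234567"
    endpoint: getAttr('data-endpoint')          // e.g. "/api/widget"
  };
}
```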

> He suggests using real URLs - instead of "/#!5753509/hello-world-this-is-the-new-lifehacker" we would have "/5753509/hello-world-this-is-the-new-lifehacker". Sounds great, but what happens when you click to a new article, which is loaded with AJAX?

I would deliver the page with plain old HTML links to other pages. On page load I'd use JavaScript to see if the new pushState[1] API is available, and if so fetch the JS required to update the page dynamically and then update the link elements to do the AJAXy thing. Seems like it would degrade gracefully in old browsers and keep the URL current in new browsers.

[1] https://developer.mozilla.org/en/DOM/Manipulating_the_browse...
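That flow might be wired up roughly like this (enhanceLinks and fetchAndRender are invented names; this is a sketch under those assumptions, not anyone's production code):

```javascript
// Intercept link clicks, pushState the clean URL, and handle the Back
// button via popstate. In old browsers the plain HTML links keep working.
function enhanceLinks(win, doc, fetchAndRender) {
  if (!(win.history && win.history.pushState)) {
    return false; // graceful degradation: do nothing
  }
  doc.addEventListener('click', function (e) {
    var a = e.target;
    if (!a || a.tagName !== 'A' || !a.getAttribute('href')) return;
    e.preventDefault();
    win.history.pushState(null, '', a.getAttribute('href')); // clean URL
    fetchAndRender(a.getAttribute('href'));                  // AJAXy load
  });
  win.addEventListener('popstate', function () {
    fetchAndRender(win.location.pathname); // Back/Forward re-render
  });
  return true;
}
```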

edit: is sjs382 my doppelganger?! (or vice versa)

Yes, if I were forced to use JS for loading the primary content I would do that too. There are a couple of options for doing so if you don't want to roll your own (and you don't - it is actually a very complex thing to get working cross browser), JQuery History [1] which is nice and simple, and JQuery BBQ if you need a bit more control.

Personally I think it's a little early for pushState, and degrading to #! is really ugly, so for now I would say just use normal page loads like everyone else unless you have a really good reason (good reason: GMail. bad reason: glorified blog.)

[1] http://tkyk.github.com/jquery-history-plugin/ [2] http://benalman.com/projects/jquery-bbq-plugin/

Agreed. I meant degrade to regular URLs, not the #! thing. IMO you don't need the #! for something as simple as switching articles.

edit: is sjs382 my doppelganger?! (or vice versa)

Haha, not at all. Just my old university ID since "sjs" was taken here ;)

sjs can be a hard handle to get. had to resort to _sjs on twitter, and that was in 2007 :)

Well now you know who to blame :^).

Not me. :p I'm sjstrutt on Twitter

He suggests using real URLs - instead of "/#!5753509/hello-world-this-is-the-new-lifehacker" we would have "/5753509/hello-world-this-is-the-new-lifehacker". Sounds great, but what happens when you click to a new article, which is loaded with AJAX?

Well you've already confirmed that the client is running Javascript so:

1. Use JavaScript to rewrite the existing clean URLs into JS function calls (or however you want to handle that)

2. Use window.history.pushState to push the clean (canonical) URL for the page you're viewing to the browser.

I'm not sure why this isn't obvious...

history.pushState() and history.replaceState() are introduced in the html 5 spec, so not all browsers support it yet.

Test for the API and fall back to the hashbangs if it isn't available. Check out this example in an old browser and a modern one:


My point was the degradation to hashbang (which is what most users are going to see) is absolutely horrid, to the point that I would actively avoid it.

You're concerned about /content/123/#!/content/456/, right?

Well, if the client isn't modern, 301 them to #!/content/456/ and proceed as you would.

Either way though, all this is terrible for a blog, or even Twitter.

The author also completely misses the business case: putting a single gateway through which Google can index makes sense for huge content generators. Sites like Twitter actually have some leverage with Google, as the Twitter firehose is more "real time" than Google's index. Using the #! allows Twitter tight control over Google's access to content. As for Gawker, I suspect they overestimate their leverage with Google, and have no similar business case.

"The author also completely misses the business case: putting a single gateway through which Google can index makes sense for huge content generators."

That's called a domain name. www.lifehacker.com is a single gateway to all of lifehacker's blog posts. Then you use the path part of the URL to specify which post or collection of posts you want.

Twitter hasn't leveraged anything. The tweet URLs without the hash-bangs were more accessible to Google.

You want tight-control as to which user-agents can see which set of tweets - that's what the Controller is for on the server.

sitemap.xml and robots.txt offer much, much more control over Google's crawling, and require essentially no dev work.

And can be ignored...

The hashbang technique could be easily worked around by a company with the resources of Google, and the PR ramifications of GoogleBot violating robots.txt would be pretty big.

Looks like Gawker sites show up pretty much empty with JavaScript turned off, and on top of that there's no warning that it has to be turned on. What ever happened to graceful degradation? Give me 1996 text-only if you have to, but please, don't break the web by forcing JavaScript. Even Gmail, the poster child for JavaScript/Ajax, detects that you have it disabled and shows you an HTML-only version.

Can someone explain to me why a site like Lifehacker even needs JavaScript at all, let alone so much? It's a blog; presumably its value should come from its textual and pictorial content.

It's just nuts that I have to burn so much CPU just to read some text.

Parkinson's Law: "Work expands so as to fill the time available for its completion."

And if you think that doesn't sound like a good reason, you'd be right.

The web is evolving to provide media experiences that are lower latency and much more exploration oriented. No page reloads and greater interactivity are the first step.

All of that adds exactly zero to the actual content of the blog. You know, 'content', the stuff I actually visit blogs for.

Furthermore, I've never been annoyed at page reloads while reading a blog. Now their shitty transition crap?... There is nothing "lower latency" here.

>All of that adds exactly zero to the actual content of the blog. You know, 'content', the stuff I actually visit blogs for.

Who cares? Only you. Gawker cares about lower latency = more pageviews per visit = more $$$ from ads.

It's not lower latency, it's hidden latency. If you are one to be randomly amused and distracted by shiny things, perhaps you'd like it more.

Most cost analyses show client-based UI as a simpler and more cost-effective development model. Developers have more control over the UI and do not have to go through as much routing to accomplish a user action; this directly affects time to market as well as ongoing development and maintenance costs. Many organizations are switching for development and time-to-market costs alone, not just because it is cool, as the original author put it.

Sending an e-mail does not add any real 'content' over telegraphing the same message. It's just a change of presentation layer. I'd still rather send e-mails.

And the same way I would rather create JS-driven sites. It allows me to be more creative with the presentation, while staying in the comfort zone of HTML/CSS/JS. Offloading work from my servers to the computers (that, based on my experience, are near-idle while browsing) of end users is a nice benefit.

Oh, and I would disagree about the 'There is nothing "lower latency" here' statement. I visited lifehacker.com out of curiosity, and noticed something: with JS navigation, I could scan headlines in the sidebar while the article was loading, something I would not be able to do while a 'traditional' page was rendering.

Of course, it's not a general rule. There are projects where you support IE6/7, and then there are projects where you don't give a damn whether it works for anything but edge versions. Same with using JS as presentation layer.

> with JS navigation, I could scan headlines in the sidebar while the article was loading, something I would not be able to do while a 'traditional' page was rendering.

...Wha? You can make traditional pages that display progressively just fine. In fact that's the default mode of operation for HTML and was supported as far back as Netscape 1.0. What's it with people, why do they keep forgetting about sensible architectural decisions just because they were made long ago? Or am I just getting old?

Most web programmers have always had the attention span of a goldf-- ooh shiny thing!

Bullshit. The fundamental difference between email and telegraphs is that you don't have to "go down to the email office" (and furthermore, email is more decentralized).

You know what is otherwise nearly perfectly analogous to telegraphs? Texting. That stuff is popular as fuck.

Well, I wish these ambitions that people have for the web -- like providing entrepreneurs and engineers a way to deliver apps without relying on Microsoft or another platform owner and like providing designers and artists a medium for artistic expression -- did not make writing and reading simple text documents a confusing experience that can go wrong in dozens of ways.

And does anyone besides me find it ironic that the term 'thin client' is sometimes applied to this hairball of ever-increasing complexity we call the web browser?

You are right, the web is diverging into pages and applications; unfortunately some are using application technology to deliver pages, and yet others keep trying to force page development patterns and practices onto applications. The progressive enhancement one is the worst of all the sins of page developers writing apps. It kills the model and spikes the complexity.

of ever-increasing complexity we call the web browser?

Much of the complexity, at least from a development standpoint, is when people try to sprinkle in Ajax. Pages and apps are two very distinct development patterns, and mixing them creates a maintenance monster. If one is providing pages, one is better off grabbing a CMS and delivering pages; if one is building an app, then one is better off throwing away JSP, ASP or PHP and writing all of the UI logic in client-side JS that communicates with the server via REST and gets data that the UI is then responsible for displaying. Anything in between becomes a maintenance monster.

Even without considering the HTML5 History API, is it so hard to just treat "/1234/spam" and "/#!1234/spam" as synonymous, and use the latter when you navigate with JavaScript enabled?


1. You visit "/1234/spam", and get served full HTML page with some (optional) JavaScript for progressive enhancement.

2. You click on a "/4321/ham" link, but JavaScript hooks it and replaces the URI with "/#!4321/ham" (if your browser doesn't support the HTML5 History API, of course). Yes, there is one full-page reload.

2.1. (Alternative) Or - even better - you can be redirected to "/#!1234/spam" on step 1, so you won't notice the glitch on your first click.

3. You continue to navigate with AJAX, now without reloads. You can bookmark pages and so on (and if you somehow happen to lose JavaScript support you could just remove "#!" to get a valid URI).

Very simplified implementation cost:

1. `$("a[href]").attr("href", function(i, href) { return href.replace(/^\/(?!#)/, "/#!"); });` on every page

2. `if (location.pathname === "/" && /^#!/.test(location.hash)) { $("#content").load(location.hash.replace(/^#!/, "/") + "?format=content-only"); ... }` on the / page.

3. Ability to return undecorated ("article-only") content on `/1234/spam?format=content-only` requests.
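A hashchange listener would then keep step 3 going after the first load; as a sketch (names and selectors assumed, per the scheme above):

```javascript
// "#!4321/ham" -> "/4321/ham?format=content-only", matching the
// undecorated-content endpoint described in step 3.
function contentUrlFromHash(hash) {
  var m = /^#!(.+)$/.exec(hash);
  return m ? '/' + m[1] + '?format=content-only' : null;
}

// Browser-only wiring:
if (typeof window !== 'undefined' && typeof jQuery !== 'undefined') {
  window.addEventListener('hashchange', function () {
    var url = contentUrlFromHash(window.location.hash);
    if (url) jQuery('#content').load(url);
  });
}
```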

You just described exactly what you're _supposed_ to do in a progressively-enhanced single-page Javascript application. Build the existing site first without a single line of js, develop the hash-bang/HTML5 history syntax on top of it, and replace the links if javascript exists.

The Gawker family of sites failed at doing all of this.

I was building an app recently that used pushState/replaceState instead of the hash-bang syntax, but getting content navigation right isn't easy. For example, when attaching the event handler to the links, you have to be careful not to trigger on middle or right clicks, and you have to keep track of the scroll state so that going back keeps the same behavior, and also take a copy of the previously selected text. Without all of that, intra-page navigation feels unnatural and uncanny.
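The middle/right-click part of that, as a sketch:

```javascript
// Only hijack plain left-clicks, so middle-click, right-click and
// ctrl/cmd/shift-click keep their native open-in-new-tab/window behaviour.
function shouldHijackClick(e) {
  return e.button === 0 &&
    !e.metaKey && !e.ctrlKey && !e.shiftKey && !e.altKey;
}
```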

Good points, but it seems some of that (scrolling, selection, form data, etc.) should be left to the browsers to fix/implement/deal with. Granted, it is a really bad user experience.

Seems like this should be made a JS library in the meantime. (Hint, hint. =])

HTML5 pushState/replaceState is the "correct" way to do this for compliant browsers, though... right?

We had a similar problem with our site, and we solved the issue by using <a id="linkID" href="pathToContent">title</a> links with a jQuery event handler that prevents the default behavior of the link.

With this implementation, when the link is clicked the appropriate content is served through an Ajax call, and the crawlers are able to index the content.
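A sketch of that handler, factored so the Ajax call is injectable (the real site's code may well differ):

```javascript
// Returns a click handler: cancel the default navigation and load the
// link's real href via Ajax instead. The crawler still sees the href.
function ajaxClickHandler(load) {
  return function (e) {
    e.preventDefault();
    load(this.getAttribute('href'));
  };
}

// Browser wiring (guarded):
if (typeof jQuery !== 'undefined') {
  jQuery('#linkID').click(ajaxClickHandler(function (url) {
    jQuery('#content').load(url);
  }));
}
```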

You can see it in action here http://lynkly.com

Yeah, I found the jQuery solution works best. The href is for the crawler, and for your website you override the onclick.

good one @hybrid11

I like the smooth two-column layout and flow. It's very high class.

Newer browsers ( http://caniuse.com/#search=history ) can manipulate the url without needing the #! hack.

To be specific, it's just the # hack, or fragment hack if you like. The bang leading the fragment is a Google-specific thing to help the Googlebot crawl your content.


<strike>No, timb was referring to the HTML5 History API. It makes it possible to alter the path component of a URL via JavaScript, eliminating the need for the # hack. Try browsing repositories on GitHub with a WebKit-based browser to see a real-world example.</strike> (sorry for my misreading)

And raganwald was referring to the '#!' hack, since it's really just the '#' hack. He wasn't addressing the history api.

However, all those IE users out there wouldn't have that API. I remember that Facebook is starting to use that API already.

The "mangled" URL solution seems to force a reload on every new request to Lifehacker as well, just like what Facebook did with the hash-reload.

Thanks for the link. I had not realized that pushState would allow for URL address change w/o reloading the page. Unfortunately the support is far from ubiquitous (Firefox support requires the current beta).

I think what the author of this diatribe fails to understand is that the separation between the part of a site that uses client-side navigation and the part that uses server-side navigation is an architectural design choice, not an aesthetic choice about how a developer wants their URLs to look.

Client side navigation has performance benefits, ability to use web apps offline, etc. These are all valid reasons to choose to use it. It may not be ideal that the end user is exposed to that detail by breaking the URL up between the "server part" and "client part" separated by the # character. But that's where we are today in browser design.

I don't buy the argument that the # scheme makes a web site more fragile. You can break the server side of your app just as easily as you can break the client side.

The more complex your site gets, the harder the /foo/bar => #!/foo/bar dance becomes. This is hampered by the fact that when you bring these issues up with business folks, it'll probably go something like this (from experience):

1) How many of our users will be affected?

2) How much harder (i.e. how much longer) will it be to do it right?

3) Don't we depend on JS to inject ads anyway? Ads are the whole reason anybody's paying for this...

When you truthfully answer #1, if #2 is more than about 5 minutes, nobody's going to budget for it.

Overall, the hash-as-URL argument is hampered by the fact that Gawker is a prime example of going single page just because you can. Other than fancy page->page transitions, I don't see what their new setup buys them.

For the time being, I will personally continue to build #!-only websites, designed exclusively for javascript enabled browsers. Maintaining two versions (along the guidelines of progressive enhancement) is just too much work (maintaining and testing html view templates as well as jQuery or moustache templates) considering that so few people lack javascript. I wouldn't let a release go live without running a selenium suite across it in any case. My perspective would be different, I imagine, if I worked on a large team that could 'afford' to take the progressive enhancement route.

The point of progressive enhancement is specifically to not maintain two versions. You have just one version of your code for all user agents, and then you enhance it for agents that have better features.

Now, worrying about having to build your templates twice -- once in your back-end language and then again in JavaScript -- is a valid concern. Code duplication is never good. However, mustache is implemented in many, many languages (http://mustache.github.com/), which means that you really can build your templates just once and call out to very similar code to populate them with data.
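To illustrate the share-one-template idea, here is a toy stand-in for the real mustache library, handling only simple {{name}} substitution:

```javascript
// Toy renderer for mustache-style {{key}} placeholders. The real
// library does much more (sections, escaping, partials); the point is
// that one template string can be rendered by parallel implementations
// on the server and in the browser.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in data ? String(data[key]) : '');
}

console.log(render('Hello, {{name}}!', { name: 'web' })); // Hello, web!
```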

Yes, you would need to run tests against the site with and without JavaScript (or CSS or XHR or any other feature that you were using to progressively enhance your page, but how far you go is really up to you), but if that's automated, it isn't much effort on your part.

I'll be the first to admit that it's fine to cheat in the code to give your users a better experience, but I don't think it's a good idea purely to take development short cuts or cut costs. If our roads were built that way (and in India they are), you'd end up with potholes every six months (and in India we do).

>> I will personally continue to build #!-only websites, designed exclusively for javascript enabled browsers

We as web developers just spent the last 10 years as slaves to IE6, saying "oh, how I wish developers would/could develop for standards, not just with platform X in mind". And here we are again, setting ourselves up for an even worse situation than the IE6 problems. We have sites designed with only (iPhone | IE6 | JavaScript-capable) browsers in mind. "It's too much work" is the same answer given time and time again by sites designed only for platform X, but it's not a good reason when the platform you're delivering on is, by definition, a network of variable-capability platforms.

It's fine if we want to try to push the state of the art of rich internet apps, but at what point do we stand back and realize that we're not building a website (a collection of HTML documents on the world wide web) but rather delivering a javascript executable to our user that just happens to be renderable in most modern web browsers?

I don't mean to single you out; it's something we all have to deal with. But is there anyone at the standards organizations listening to the pulse of the new web? If people want to deliver applications to users via a URI, why do we have to include all the extra baggage of HTTP/HTML/CSS/JavaScript?

If we as the artists of the web are going to break the implied contract of what the WWW is, we should at least be honest with ourselves and work toward a real solution rather than trying to staple on yet another update to HTML to try its hardest to pretend to be its big brother Mr. Desktop App.

There's a thing called "usability". A web application may need javascript; a website presenting documents and information has no excuse not to work in pure HTML.

Typical crap: you can't access nvidia.com anymore with lynx/links. When your goddamn proprietary nvidia driver doesn't play well with your kernel upgrade, you no longer have any way to download a new one without running X11, though it worked not too long ago.

I take almost the polar opposite approach to you, always providing a safe fallback to plain ol' HTML. I feel it's a defensive style that avoids a single point of failure. I run NoScript since it makes most pages load much faster. No offense, but I don't want to run your code; I'd rather read your content.

Fair. Would you feel the same way about a web application, as opposed to a content-heavy site?

Are there still content-heavy sites? Not to be facetious, but it seems user demand is aligned with smaller and smaller, more and more active bits of content served by web applications. At some point you may have a lot of content; however, it's not so much heavy as it is highly interactive.

Can you clarify something - what benefit does the #! syntax give your site specifically? The reason for using this seems to be misunderstood by the original article.

The #! syntax alerts google's crawler that the site supports their ajax crawling scheme [1].

The fragment (part after the hash), in general, is used to denote a specific state (or "page") in a single page ajax site.

[1] http://code.google.com/web/ajaxcrawling/docs/getting-started...
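Roughly, the mapping the crawler applies looks like this (a simplified sketch; the scheme's exact escaping rules are in the linked docs):

```javascript
// Simplified sketch of Google's AJAX-crawling URL mapping: a crawler
// that sees http://example.com/#!/foo/bar instead fetches
// http://example.com/?_escaped_fragment_=..., and the server is
// expected to return a static HTML snapshot for that request.
function toEscapedFragment(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url; // no hash-bang: nothing to rewrite
  const base = url.slice(0, i);
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(url.slice(i + 2));
}
```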

Are you trolling or just a lazy jackass?

Not trolling. But I admit that I'm overstating my position a little bit to see if any interesting discussion comes of it. The reality of my current situation (a one, sometimes two person team) has meant that I have indeed been following the convention I just described. That's not to say I want to, but as I mentioned, I will probably continue to for the foreseeable future without too much hesitation.

Note the account's name, 'TrollBait'. Brand new too. If he keeps those kinds of comments up, he'll be banned in no time.

On the other hand, I'm impressed how you just ignored TrollBait's insult and just replied in an absolutely normal manner.

Proposal: extend Transfer-Encoding to allow gzip encoding of multiple responses in a persistent connection as if they were a single stream. This way, a browser which expects to request multiple pages from a site can keep the connection open, and the repeated content on each page will be gzipped into oblivion.

The problems I see are that this would require both server and browser support, and that leaving persistent connections open for minutes could be problematic.

Doesn't SSL do compression? (Most good encryption systems do, because compression strips out the redundant data that codebreakers need.) If so, you should be able to do persistent connections over HTTPS, and get the whole stream compressed.

Chuckle ... Clean URL blogger doesn't fixup his own URLs!

It looks like the URL of that article is case-insensitive and the page does not specify a canonical URL. Differently-cased variants of the URL render the same content, and <link rel="canonical"> does not exist in the source code of that page.
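For reference, the fix being pointed at is a one-liner in the page head (example.com used here as a placeholder, not the blog's actual URL):

```html
<!-- Declare one preferred URL so search engines fold the
     differently-cased variants into a single page -->
<link rel="canonical" href="http://example.com/blog/breaking-the-web-with-hash-bangs">
```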

Hash bangs obviously aren't the only way to break web pages; has anyone else noticed that the current Google homepage has broken the keyboard shortcuts, at least in Firefox? You can't (or at least I can't) access the menu with the Alt key now.

WFM. (Firefox, 64-bit Linux nightly build.)

Did they do any sort of beta testing on this before they rolled it out?

Please. I for one am using the fragment hack to build a richer user experience on a widely available platform. The interactions I am building wouldn't be possible without it (or the HTML5 pushState/replaceState equivalents). Rendering my content on the client in JavaScript makes me a lot more productive, and the interactions I facilitate on my site would be impossible without it. The implementation choices of a single web site cannot "break the web". I trust that the author of this post has a lot of experience and valuable things to say about building accessible sites, but accessibility and graceful degradation aren't the only gods in my pantheon.
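For what it's worth, the fragment hack and pushState can be chosen between at runtime; a sketch (the helper is pure so the decision is testable outside a browser, and the names are illustrative):

```javascript
// Decide how a client-side state should appear in the address bar:
// a real path when history.pushState exists, a #! fragment otherwise.
function stateToUrl(path, hasPushState) {
  return hasPushState ? path : '/#!' + path;
}

// In a browser this would be used roughly like:
//   if (window.history && history.pushState) {
//     history.pushState({}, '', stateToUrl('/foo/bar', true));
//   } else {
//     location.hash = '!/foo/bar';
//   }
```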

Even worse, they don't properly use the canonical link tag.

In the first place, why should a news publishing site even use hash-bang URLs? You are not a web app built on Ajax.

>Being dependent on perfect JavaScript

You're almost always dependent on perfect code to keep your site running, be it server side or client side. If code breaks on the server side you're just as screwed.

Server-side code runs on one machine. Client-side code needs to run on many OS/browser/plugin combinations.

The key difference is that the JavaScript approach makes you dependent upon third party ads being served correctly, while the standard HTML approach makes bad code in third party ads just affect the display of those ads.

That third party dependency is a significant difference.
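One hedge, if you do go the JavaScript route, is to wall off third-party code so a bad ad only loses the ad (a minimal sketch; real pages also use async script tags or iframes for this):

```javascript
// Run untrusted third-party code so that an exception in it is
// contained instead of killing the page's own rendering script.
function runThirdParty(fn) {
  try {
    fn();
    return true;
  } catch (e) {
    // In a real page: log the failure and leave the ad slot empty.
    return false;
  }
}
```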
