

Hashbang URIs Aren't as Bad as You Think - reynolds
http://mtrpcic.net/2011/02/fragment-uris-theyre-not-as-bad-as-you-think-really/

======
krakensden
> Now, take away the constant loading of the JavaScript and CSS, and you've
> significantly decreased bandwidth and server load

On a related note, if this is a problem for you, now might be a good time to
check your Expires headers. The 'some clients turn off caching' argument seems
like a bit of a non sequitur too: some clients also have JavaScript disabled. Some
clients are IE6. How many clients are like that in the wild? How many of
/your/ users?
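
For the record, far-future headers are a few lines in any server. A rough sketch in Node.js (the handler and paths are invented for illustration; the two headers are the point):

    // Serve static assets with far-future caching headers so returning
    // visitors never re-download the JS and CSS.
    var http = require("http");

    http.createServer(function (req, res) {
      if (req.url.indexOf("/static/") === 0) {
        res.writeHead(200, {
          "Content-Type": "application/javascript",
          "Cache-Control": "public, max-age=31536000",               // one year
          "Expires": new Date(Date.now() + 31536000000).toUTCString()
        });
        res.end("/* asset body served here */");
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);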

~~~
natrius
The caching justification felt very contrived to me. I'd guess that the
percentage of users with broken cache handling is lower than the percentage of
users with javascript disabled. I'd rather not guess, but the author didn't
provide any actual evidence that this is a problem.

If your cache headers are set incorrectly, you should probably fix that
instead of reworking your entire site.

~~~
mtrpcic
It's true that the caching argument seemed contrived (and was, in a sense), so
I've added an edit that takes your comments, and others', into consideration
and makes them known to other readers. Thanks for the feedback.

------
perlgeek

> If I have JavaScript enabled, and send my friend (who has JavaScript
> disabled) a "Hashbang" URI, he won't load the content!

> You're right. This is the primary drawback of an approach like this,
> and you'll need to figure out if that's acceptable for you.

Simple answer: "no". It's not acceptable for me in any way. Not as a user, who
disables JS by default (for speed, security, and annoyance avoidance), and not
as a site owner either.

So, these URLs are still as bad as I think.

Do you know the old joke about Java, where people can't buy hammers anymore
but have to buy a HammerFactory to produce hammers for them (and then a
HammerFactoryFactory, etc.)? Requesting a page that then requests the right
content from the server just feels like an unnecessary Factory step.

~~~
mtrpcic
That's why this approach isn't for everyone. To each his own. One thing to
keep in mind, however, is that the approach is meant more for projects that
can be thought of as "web applications", rather than websites; things that
would require JavaScript to be enabled to even work.

Also, another way of thinking of it is to stop thinking of the browser as a
browser. If you do the hashbang thing properly, you're now treating your
browser as an API consumption service, and treating it as you would any other
application. This isn't necessarily a bad thing, but does have drawbacks.

~~~
andybak
The problem is that - as Gawker have shown - people start thinking that
techniques that might be acceptable for web apps are acceptable for content
sites.

They don't think this through as they don't use anything other than a modern
web browser with javascript switched on.

------
andolanra
I'm not a fan of the hash-bang URLs, but I think this article largely misses
the objections I (and many others) have to them. Nothing he said is unique to
hash-_bang_ URLs; the same thing could be done with typical fragment
identifiers, and indeed has been for ages. My objection to hash-bang URLs _in
particular_ has nothing to do with using fragments to dynamically load
content, which I think is a great idea. If you write your code like the
article suggests, you'd be fine by me: he's taking a page which can be
accessed by a non-JS-enabled browser and using JS to modify the links so they
load content dynamically. I can still look at that page with, say, wget or
lynx and nothing would break.

The Gawker redesign (and the whole hashbang scheme) requires any agent that
doesn't execute JS to _mangle the URL_ in order to get at the static content.
And that's the issue here: _forcing non-compliant agents to change_ in order
to do what they've always done flies in the face of the Robustness Principle.
Hash-bang URLs are an inelegant solution to a minor problem; use normal
fragment identifiers for your Ajax instead.
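
His approach, roughly (a minimal sketch; the "content" id and the link URLs are made up for illustration):

    // Progressive enhancement: the links are ordinary hrefs that work
    // without JS; with JS, clicks fetch the content via Ajax and record
    // it in a plain fragment -- no "#!" required.
    var links = document.getElementsByTagName("a");
    for (var i = 0; i < links.length; i++) {
      links[i].onclick = function () {
        var href = this.getAttribute("href");       // e.g. "/about"
        var xhr = new XMLHttpRequest();
        xhr.open("GET", href, true);
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 4 && xhr.status === 200) {
            document.getElementById("content").innerHTML = xhr.responseText;
            window.location.hash = href;            // becomes "#/about"
          }
        };
        xhr.send();
        return false;                               // keep the browser from navigating
      };
    }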

~~~
Isofarro
"And that's the issue here—forcing non-compliant agents to change in order to
do what they've always done runs in the face of the Robustness Principle."

You nailed it. Hashbang URLs are not backwards compatible with the existing
World Wide Web. Existing tools that use the Web today cannot, in their current
state, use these hashbang-driven sites in the same way.

As another commenter noted, hashbangs silo a site behind a non-standard
requirement. That siloed mindset explains why posts such as this one overlook
or ignore the fundamental behaviour of the Web and how it brings together
fragments of a distributed conversation.

Hashbang URLs break the World Wide Web stack at the HTTP and URL levels, and
attempt to fix the damage at the behaviour level. The fix is inferior, and
results in breakage of the World Wide Web model (like framesets).

It's a technical implementation of a walled garden. Walled off from the
existing World Wide Web.

------
emef
Large (or lots of) JavaScript files can slow down the loading of the page, but
that wasn't the real issue people had with JS-controlled sites (as I
interpreted things).

The real delay cost comes from the inevitable redirect to the root of the
website. When you visit domain.com/user-name, a JS-controlled site will
usually redirect you to domain.com#!/user-name and then load the page. There
is no avoiding this unless you want really ugly URLs: domain.com/user-
name#!/user-name...
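
Concretely, the pattern is something like this (a rough sketch, not any particular site's code):

    // On a "real" URL like /user-name, the server returns a stub page
    // whose only job is to bounce the path into the fragment:
    if (window.location.pathname !== "/") {
      window.location.replace("/#!" + window.location.pathname);
    }

    // Back on the root page, the client routes off the fragment:
    function currentRoute() {
      var hash = window.location.hash;            // e.g. "#!/user-name"
      return hash.indexOf("#!") === 0 ? hash.slice(2) : "/";
    }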

So large JS files might slow the initial load slightly, but adding an entire
redirect is the kicker.

Like most things, there's a time and a place. Using hashbang URLs can improve
response times when navigating through a site and provide a really cool
experience. On the other hand, it definitely doesn't work for all browsers and
users.

~~~
btipling
Such a redirect isn't slow? It's near instant if done right. Here's twitter
doing it: <http://twitter.com/bjorntipling>

It's absolutely super fast for me. How is that a 'kicker'?

Large JS files only need to be downloaded once and are cached.

There's no problem here.

~~~
emef
> Such a redirect isn't slow? It's near instant if done right. Here's twitter
> doing it: <http://twitter.com/bjorntipling>

On slower devices (such as mobile phones) the redirect can be more noticeable.
You're right, though: in general it's not really a problem, but it might be
more of one than a larger-than-normal JS file.

> Large JS files only need to be downloaded once and are cached.

Yeah, I was trying to say that. Larger JS files aren't a problem at all; load
them once and they're cached.

------
jonursenbach
Shebang. It's called a shebang.

~~~
BerislavLopac
"a shebang (also called a hashbang)" --
<http://en.wikipedia.org/wiki/Shebang_%28Unix%29>

------
farlington
Hear hear.

I can think of many cases where ajax loaded content and the attendant hashbang
URLs are far preferable, like with twitter's web interface or with gmail. It
just makes sense for web applications to work differently than static content,
and persistent display of information often beats out clean looking URIs.

Plus the whole anti-hashbang thing has a reactionary air to it.

~~~
prodigal_erik
"Web app" is a misnomer. If the content isn't browsable hypertext, it has
abandoned the Web and stepped backwards into the ghetto of siloed
client/server apps that were deservedly hated in the 90s. And the industry has
yet to deliver a trustworthy js sandbox that can safely run any code it
happens to find anywhere—the majority uses the defaults because they don't
know how reckless those defaults are.

~~~
farlington
Thanks for this response, it's really thought provoking. I have a few
questions:

'The ghetto of siloed client/server apps'? Would those be like ActiveX
controls and Java applets? Isn't JavaScript fundamentally different?

Does the definition of 'browsable hypertext' preclude hypertext that's
scripted to operate differently, e.g. 'ajax'? Are you not still 'browsing
hypertext'?

The industry has yet to deliver a trustworthy js sandbox—should browsers not
support JavaScript?

~~~
prodigal_erik
By siloed I was referring to all the VB-style apps that predated widespread
use of the Web. You had to use a single mediocre client app because it was the
only piece of code in existence that could support the proprietary protocol
for the matching server. Lock-in was rampant and building a better client or
repurposing the data in any way was almost impossible.

Now we have servers that may technically still be talking XML or JSON or
something over HTTP, but it might as well be an opaque proprietary protocol,
because there's only one piece of code in existence (the javascript embedded
in some page) that knows how to send meaningful requests to the server or
decode its responses. The protocol isn't even stable enough to reverse-
engineer because the author can make arbitrary changes to it and migrate
everyone to an updated version of their client code at any moment. I find this
vastly inferior to query strings, multipart/form-data, and scrapable semantic
HTML, which a growing number of web devs completely neglect (none of whom I'd
ever hire).

> should browsers not support JavaScript?

They shouldn't run it by default without asking whether the user trusts the
author. Privacy violations are rampant and even malicious scripts have become
a recurring problem. I don't see why a sandbox that works shouldn't be
possible, but it hasn't happened yet.

------
skybrian
I wonder if anyone has tried this: suppose that to make page transitions
faster, you strip out everything from each page other than the data itself?
The page should just contain the main content (which would have to be fetched
via AJAX anyway), and JavaScript can fill in things like sidebars and
navigation.
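
Something like this, I mean (a minimal sketch; the sidebar URL and element id are invented):

    // The HTML ships with only the main content; secondary chrome is
    // pulled in after load, so the first paint is just the article.
    window.onload = function () {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "/sidebar.html", true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          document.getElementById("sidebar").innerHTML = xhr.responseText;
        }
      };
      xhr.send();
    };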

~~~
andymism
IIRC, Posterous does this as part of their caching strategy. Static content is
HTTP-cached and dynamic content is pulled in via AJAX on page load. The result
is that the primary content is available quickly and dynamic secondary
information (view counts, etc.) pops in soon after.

~~~
andybak
Jolly good. A site that understands the concept of 'progressive enhancement'.

------
jmah
A better approach in all ways is to use the HTML5 history API, a la GitHub.

<https://github.com/blog/760-the-tree-slider>
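
The core of the technique, roughly (a minimal sketch, not GitHub's actual code; the "content" id is made up):

    // Load a page into the content area, then record the real URL in
    // the address bar -- no "#!" involved, and the URL stays bookmarkable.
    function load(url) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", url, true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          document.getElementById("content").innerHTML = xhr.responseText;
        }
      };
      xhr.send();
    }

    function navigate(url) {
      load(url);
      history.pushState({ url: url }, "", url);
    }

    // Back/forward buttons replay the state without adding new entries:
    window.onpopstate = function (e) {
      if (e.state) load(e.state.url);
    };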

~~~
mtrpcic
The article mentions that the example doesn't use the HTML5 History API, and
that the onus is on the developer to check for it and use it as appropriate.

------
zalew
I only wonder why websites that use it themselves break other shebang URLs
when you post them. Try sharing a Twitter URL on Facebook (you always have to
strip the #!/ manually).

~~~
ptarjan
I'm an engineer at Facebook.

We started using Google's AJAX crawling spec. Whenever we see:

    http://twitter.com/#!/ptarjan

we actually crawl:

    http://twitter.com/?_escaped_fragment_=/ptarjan

Let me know if you notice any issues.
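
For the curious, the mapping boils down to something like this (a sketch of the spec's transformation; the real spec also percent-encodes special characters in the fragment):

    // Rewrite a hashbang URL into its crawlable form, per Google's
    // AJAX crawling spec.
    function escapedFragmentUrl(url) {
      var i = url.indexOf("#!");
      if (i === -1) return url;                   // not a hashbang URL
      var base = url.slice(0, i);
      var sep = base.indexOf("?") === -1 ? "?" : "&";
      return base + sep + "_escaped_fragment_=" + url.slice(i + 2);
    }

    // escapedFragmentUrl("http://twitter.com/#!/ptarjan")
    //   => "http://twitter.com/?_escaped_fragment_=/ptarjan"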

~~~
zalew
Oh, now it works and links fine! Thanks for the update.

//edit: btw, do you work on the new messages? I've got an annoying bug there:
as you type, the cursor often jumps to the end. Annoying as hell when you're
editing something in the middle of a sentence.

------
Charuru
Much of the speed increase (assuming a fast and properly cached server) can
come from the decreased need to repaint the entire DOM. For my site, this is a
big deal.

