
JavaScript and URLs - icey
http://shiflett.org/blog/2011/feb/javascript-and-urls
======
udp
Using a # to show JS state changes in the URL isn't some huge developer cock-
up like it's being portrayed to be; it's just making the best user experience
out of the functionality available - see this, for instance:
<http://code.google.com/p/reallysimplehistory/>

HTML5 provides the new history.pushState API, which provides all the
functionality hashbangs are being used for without the hashbangs:
<http://www.kylescholz.com/blog/2010/04/html5_history_is_the_future.html>.
The fact that a new feature has been added for it goes to show that it's
something necessary - Facebook, for instance, uses history.pushState if
present, and falls back on hashbangs when it isn't.
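
A minimal sketch of that detect-and-fall-back pattern (loadContent is a
hypothetical Ajax loader of mine, not Facebook's actual code):

    // Use real URLs where pushState exists, hashbangs where it doesn't.
    function navigate(path) {
      if (window.history && window.history.pushState) {
        history.pushState({path: path}, '', path); // real URL, no reload
      } else {
        window.location.hash = '#!' + path;        // fallback: state in the hash
      }
      loadContent(path); // hypothetical Ajax content loader
    }

    // Restore state when the user presses back/forward.
    window.onpopstate = function (event) {
      if (event.state) loadContent(event.state.path);
    };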

It's often a very important thing to be able to do; otherwise, if you have
(for example) multiple tabs or other frontend navigation in your web app,
pressing back would take you right out of the application, rather than back
to the last place you were.

~~~
T-hawk
It seems that much of this consternation about munged URLs (here and in other
threads) is about preserving the functionality of the "back" button.

Why do we need a back button?

The DOS and Unix command lines never had a back button. Lotus 1-2-3 and
WordStar and Word and Excel never had a back button. Photoshop and GIMP never
had a back button. SQL scripts and database servers never had a back button.

Many of these programs do have application-specific implementations of an
"undo" function. Perhaps that is the way to go for future software, rather
than trying to wedge a universal "back" function into applications that don't
really map to that paradigm? What does "back" even mean when the application
is a game, for example? Flight Simulator and Ultima and Doom never had back
buttons.

~~~
simonw
It's nothing to do with the back button. It's about addressability - being
able to link to things. Links are kind of important on the Web.

~~~
T-hawk
> It's about addressability - being able to link to things. Links are kind of
> important on the Web.

Of course, but not in all applications and situations. What does it mean to
link to the middle of a Farmville game session? It's by definition a transient
state within a process that does not need re-linkability or history.
Similarly, we've been using desktop applications forever with no notion of
addressability. You don't send an image with the expectation that the
recipient can jump into the middle of your editing actions in Photoshop.

When a web application is fundamentally about an interactive process rather
than publishing content, the notion of back and history can lose much meaning.
Using Back as a universal undo button, or a URL to bookmark any arbitrary
slice of content, does not hold up in a world of complicated partial-page
AJAX updates. Should designers continue to contort JavaScript and the very
notion of URLs to satisfy this increasingly misplaced user expectation, or
work to educate users that sometimes "Back" or bookmarking cannot have a
meaningful result?

Nobody expects a "back" or "undo" button in everyday activities like driving a
car or cooking or making social conversation. How did this expectation become
so irreversibly fundamental in computers?

~~~
adamesque
_> Nobody expects a "back" or "undo" button in everyday activities like
driving a car or cooking or making social conversation. How did this
expectation become so irreversibly fundamental in computers?_

Wow. We're only _just now_ getting to the point where it _isn't_ possible to
accidentally render your computer unusable with a few stray actions.

You seem to be forgetting just how unforgiving computers were even 8 years
ago, and why iOS devices are so enticing to regular folks.

~~~
bruceboughton
Undo isn't back. They're different concepts. If I send a friend request on
Facebook and then click back, that doesn't cancel the friend request.

Don't conflate the two concepts. Undo good, back... not always required?

------
mythz
I love how all these semantic HTML police are putting the hate on JavaScript.
Comparing it to Flash is a gross misrepresentation. If Flash were a superior
technology that easily let you create a superior user experience - one that
behaved the way browser applications ought to - it wouldn't be dying away. It
can't, but JavaScript and skilled JS devs can!

Why should we be forced NOT to use a technology that provides better
responsiveness and productivity than what was traditionally possible? Why be
forced to live in a world with server round trips for everything, just so we
can say we fit perfectly within an _artificial restriction_ defined 20 years
ago, before any consideration was given to the Internet being used for
anything but viewing linked documents?

There still exists the 'content web' suitable for browsing and navigating
content! For Wikipedia and the plethora of blogs carrying content, it remains
business as usual, with no need to change.

Then there are apps that provide enhanced functionality - Gmail, Google Maps
/ Docs / Calendar, Zoho Writer, Grooveshark, etc. - delivering a far better
experience to anyone with an internet connection and a modern browser than
any set of hyperlinked documents ever could.

~~~
amadiver
> It can't, but JavaScript and skilled JS devs can!

You might be overestimating the differences between Flash and JS here --
especially when you add HTML5 into the mix. What's to stop a skilled JS dev
from playing <audio> when the user hits a page, burning processors with a few
autoplaying <video>s, after forcing the user to sit through an interminable,
animated <js/css3/svg> 'loading' screen? At that point, it's up to our own
good taste to decide how to wield that kind of power.

~~~
mythz
I don't understand what you're saying. What if JavaScript devs use HTML5
tools to purposely create a poor experience, as witnessed in Flash apps?

Then their site doesn't get return visitors and starts dying, just like Flash
is.

~~~
amadiver
I'm saying kind of the opposite -- JS/HTML/CSS and Flash are now more similar
than they are different. It's not the platform that's the problem. It's what
you can do with (and how you can abuse) the platform.

------
spiralganglion
So, where is the solution that gives you pretty URLs with HTML5 history for
those who can have it, and identical URLs preceded with hashes for those who
can't, in an easy-to-use plugin? Also, where is the framework/pattern that
makes both styles of URL work in the absence of either technology, using
something like mod_rewrite?

In other words, how do we get it _all_, future-proofed, without so much
frustration and so many debates?

~~~
simonw
That's a nasty solution though, because then every item on your site ends up
with two URLs - a concrete, proper, doesn't-break-the-web one and a lousy Ajax
hashbang one.

Having one URL per piece of content is especially important in this age of
sharing and aggregation, where you don't want your pagerank / number-of-shares
/ other metrics spread equally across two different URLs.

~~~
djacobs
That's what canonical URLs are for.

<http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html>
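
Concretely, that's a single <link> element in the page head; both the
hashbang version and the plain version of a page declare the same one (the
address below is made up):

    <!-- Served on every duplicate version of the page; tells crawlers
         and aggregators which URL is the real one, so metrics aren't
         split across duplicates. -->
    <link rel="canonical" href="http://example.com/posts/42" />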

------
jarin
Maybe the real problem is that people aren't making a distinction between
pulling down content and pulling down content +
markup/presentation/interaction.

If you're looking to parse information from a remote resource, you should be
pulling it down in JSON or XML anyway, so as long as your application provides
data in both human-usable (i.e. Javascripty) format and machine-readable (i.e.
RESTful JSON/XML) format, both consumer groups should be happy, right?

I guess one of the problems is that Google only pulls down content + markup.
It would be nice if there were a way to provide machine-optimized content to
crawlers for indexing - basically telling the crawler "here's the data in XML
format, and here's a link to the data in human-interactable format". It's
sort of what Google's escaped_fragment URLs do, but escaped_fragment still
expects content + markup.
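
For context, Google's scheme works by rewriting the hashbang into a query
parameter the server can actually see; roughly like this (the URLs and the
Express-style handler below are invented for illustration):

    // What the browser shows:   http://example.com/app#!profile/42
    // What Googlebot requests:  http://example.com/app?_escaped_fragment_=profile/42
    function handleRequest(req, res) {
      var fragment = req.query._escaped_fragment_;
      if (fragment !== undefined) {
        res.send(renderSnapshot(fragment)); // hypothetical static-HTML renderer
      } else {
        res.send(appShell());               // normal JS-driven page
      }
    }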

~~~
perlgeek
I guess a huge problem with exposing machine-optimized content to crawlers is
that there's a big chance that what the user sees in the end differs from
what the crawler sees.

That doesn't have to be by malicious intent, it can be enough that a backend
developer adds some fields, and the template doesn't use them.

On the other hand, the machine-exposed data lacks the headings that are
usually present on all web pages, so if the user searches for <company name>
+ <keyword>, special care must be taken that the <company name> part doesn't
fail to match.

------
mattmanser
Anyone saying that Postel's law is one of the reasons that the web is robust
needs their head examining. It was a terrible idea that has been plaguing
browsers for a long time.

~~~
simonw
There are no exceptions to Postel's Law:
<http://diveintomark.org/archives/2004/01/08/postels-law>

~~~
drdaeman
There are limits to how generous one should be, though.

------
mrduncan
Another relevant blog post on the importance of URL design:
<http://warpspire.com/posts/url-design/>

------
danenania
The amount of importance placed on this concept of The Web is strange. It's
like a religion.

Any developer working on any application has to make decisions and trade-offs
about platforms, user experience, accessibility, and a hundred other things.
If something else is more important than making the back button work or having
pretty URLs, then who cares? Focus on what improves your app the most, not
adhering to tradition. It breaks the web? Get real, it doesn't break anything.
It just has other priorities. The only reasonable barometers for whether
something works are user adoption and satisfaction.

~~~
naich
I totally agree. We are seeing the emergence of completely new uses for web
pages, i.e. using a web page as an interactive tool rather than a content
delivery system. At the moment the infrastructure just isn't there to provide
proper support, so we are having to work around it, sometimes with fairly
nasty kludges. Until the infrastructure catches up with the change in use,
there will have to be compromises. Saying these compromises are intolerable
problems that should be fixed is unhelpful and misses the wider picture.

------
mwsherman
Perhaps this is too obvious to state, but progressive enhancement obviates
most of this. Any linkable state (anything someone might plausibly want to
link to) should be addressable via a regular old URL. Add a layer of JS to
intercept clicks and serve Ajax content, with the state expressed in the
hash.

The hash should look suspiciously like the URL that non-Ajax agents would
see.

I say this not as a dogmatic _should_, but because I find separating concerns
makes it easier to develop, and preserves web-ness. Ajax is an enhancement,
not an architecture.
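
A minimal sketch of that enhancement layer (jQuery-style; loadIntoPage is a
hypothetical helper of mine, not a real API):

    // Plain links keep working without JS. With JS, intercept them,
    // fetch the content over Ajax, and mirror the path in the hash.
    $('a.ajax').click(function (event) {
      event.preventDefault();
      var path = $(this).attr('href'); // e.g. /articles/42
      loadIntoPage(path);              // hypothetical Ajax content loader
      window.location.hash = path;     // hash mirrors the real URL
    });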

------
stickfigure
The author seems oblivious to the fact that there are now many different kinds
of applications on the web.

If every website were a blog, then sure - addressability, archivability,
searchability are paramount. If you're building the control panel for a
nuclear power plant, you have a completely different set of concerns.

~~~
Isofarro
Perhaps using the Web isn't the appropriate solution for a nuclear power
plant control panel. There are other, more suitable internet protocols than
the Web for this sort of application (if indeed making a nuclear power plant
control panel accessible over the Internet is an actual requirement).

~~~
tptacek
Perhaps using The Web isn't appropriate for SCADA controls, but what's the
reason why using a web stack is inappropriate? People here seem to forget that
things like J2EE and ASP.NET have displaced decades of crappy console and
Windows Forms applications.

The moment you accept that there is The Web and there are web apps and they're
not the same thing, you're right in the middle of the debate Shiflett is
talking about. The down-with-hashbang crowd doesn't acknowledge any such
difference.

------
ww520
I can understand people's frustration when the hashbang feature breaks the
URL function in the browser, but it can work nicely if done right. I've built
one of my apps using hashbangs to encode the browsing state, and went to
great lengths to ensure bookmarking and the back button work correctly in the
browsers.

It works great in the end. Every URL points to a specific page and still gets
the JavaScript interactivity. Some sample URLs:

<http://www.previouslook.com/hnews/new#!2011-03-01-15-15.22>

<http://www.previouslook.com/hnews/new#!2011-03-01-12-15.15>

<http://www.previouslook.com/hnews/new#!2011-03-01-12-45.12>
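
For anyone attempting the same, the core of the trick is restoring state from
the fragment both on initial load and on every history move; a rough sketch
(showSnapshot is a hypothetical renderer, not the site's actual code):

    // Decode the state token after "#!" and re-render from it.
    function restoreFromHash() {
      var match = window.location.hash.match(/^#!(.+)$/);
      if (match) showSnapshot(match[1]); // e.g. "2011-03-01-15-15.22"
    }
    window.onhashchange = restoreFromHash; // fires on back/forward too
    restoreFromHash();                     // handles bookmarked URLs on load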

~~~
wmf
If you've gone to great lengths already, then maybe it isn't much additional
work to support the HTML5 history API and also provide server-side rendering
to non-JS clients.

(BTW, that's a pretty cool site.)

~~~
mushtar
It's impossible to provide a server-side fix, since the URL fragment (the part
after the hash mark) does not get sent to the server.
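
To make that concrete: the fragment exists only in the browser, so the only
way a server can ever see it is for client-side code to forward it explicitly
(a sketch; the snapshot endpoint is invented):

    // A request for http://example.com/app#!profile/42 arrives as just:
    //   GET /app HTTP/1.1   <- "#!profile/42" never leaves the browser
    var fragment = window.location.hash.slice(2); // "profile/42"
    // Client code can then send it along in a normal request, e.g.:
    //   GET /snapshot?state=profile%2F42
    var url = '/snapshot?state=' + encodeURIComponent(fragment);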

------
moblivu
We need a complete rewrite of the current web languages. We are struggling
with DIVs and positioning and more DIVs to obtain results these languages
were never designed to produce. I'm mostly pointing at HTML, but JavaScript
could stand a rewrite too.

------
r0s
I learned some mod_rewrite tricks for a personal project, only to discover
many of my peers consider useful URLs to be unattainable magic!

I think it's a disservice to common users of any site when the address becomes
an opaque mess of seemingly random characters.
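
For the curious, the "magic" is often just a line or two of Apache config;
for example (paths and names invented):

    # Map a readable URL onto the real query-string handler,
    # e.g. /articles/42 -> /article.php?id=42
    RewriteEngine On
    RewriteRule ^articles/([0-9]+)$ /article.php?id=$1 [L]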

------
balupton
The hashbang is a big gory mess, glorified for all the wrong reasons. Mostly
it breaks best practices and destroys the chance for progressive enhancement
and graceful degradation.

What is truly amazing is that there are other solutions that still deliver
best practices like progressive enhancement and graceful degradation, with
less effort. This article goes into the issues and the alternatives:
<https://github.com/balupton/history.js/wiki/Intelligent-State-Handling>

------
tahu
I agree that JavaScript is breaking the traditional web - linked static HTML
pages with some forms in them. But I also believe that one day people will
start to forget this bumpy ride. I am thinking of <http://nodejs.org/> and
<http://jsdom.org/> - if the user agent has JavaScript disabled, the client-
side JavaScript code could be executed on the server.
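
A rough sketch of that idea using jsdom (API as in recent versions; treat it
as illustrative, not a production setup):

    // Node.js: execute the page's client-side script on the server,
    // then serve the resulting HTML to script-less user agents.
    var JSDOM = require('jsdom').JSDOM; // assumes jsdom is installed

    var dom = new JSDOM(
      '<div id="app"></div>' +
      '<script>document.getElementById("app").textContent = "rendered server-side";</script>',
      { runScripts: 'dangerously' } // actually run the embedded script
    );

    console.log(dom.serialize()); // full HTML after the script has run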

~~~
jacques_chester
In hashbang schemes the client-side javascript is being used to prevent page
loads and to reduce server-side rendering (though this second argument strikes
me as a red herring).

------
moron4hire
" This is what we call “tight coupling” and I thought that anyone with a
Computer Science degree ought to have been taught to avoid it."

Unfortunately, there are two reasons why this assumption falls apart. First,
there are a lot of non-Computer Scientists out there writing software. Second,
the push for everyone-gets-a-degree credentialing and the cries of "we're
short on qualified candidates" in industry have led to a serious erosion of
standards in universities; either the topics aren't being taught or students
are passed without the lesson being absorbed.

On the first point, I think this is a good and a bad thing. It's good because
it's democratic. It means that more people are finding their own way, it means
that systems are getting more intuitive, it means that barriers to entry are
being torn down. Literacy is good for everyone. But it's also bad because the
actually trained Computer Scientists and Software Engineers aren't focusing
on keeping this process going. Instead, they just want to open consulting
firms and roll out yet another CRUD business intelligence Web app.
Concerning, to say the least.

On the second point, there is a serious problem with the number of people we
are graduating through universities. And I don't care what the top
universities are doing; for every one of them there are 50 low-end state
schools and technical schools churning out graduates. I know at my
university, I was taught such Software Engineering topics. BUT--and this is a
big but--it was effectively optional knowledge. I can't say any of it was
truly "required", because I ended up working with quite a few of my
classmates several years later, and they had forgotten it all. They stood in
awe of my prowess as a programmer, when all I was doing was applying the same
lessons we had all received. If you were good enough at programming to avoid
basic syntax errors, you were good enough to get through to the end with a
barely passing grade. And few hiring managers care about grades.

These people I graduated with have benefited from the democratization of
programming as well. We should make this relationship more explicit. Keep
Computer Science pure and only graduate people who have the chops. Encourage
people who want to write websites to go into Graphic Design. Put programming
on equal terms with algebra and creative writing: we all have to study it, we
don't all have to be experts, but the people studying the subject
specifically had better be held to the highest standards.

~~~
Lukeas14
As a web developer I believe the problem and the bottleneck always has been,
and always will be, the browsers. If there were an easy way to give our users
what they want (flashy, Ajaxy webapps) without breaking the web, I'm sure
developers would be all for it. But for now, doing these things properly is
viewed as a time-intensive value-add. Web developers will always be a step
ahead of the browsers and therefore have to resort to breaking the web in
order to achieve the functionality their users/managers pay them to create.

The only way this problem will be resolved is for more of your "true" computer
scientists to start working on browsers. However, at the moment, many of them
are making too much money at their consulting firms fixing the websites
created by graphic designers.

~~~
moron4hire
Users don't want hashbangs. They want faster-loading content. Hashbangs are a
kludge to get it. This is just one example of the issues we're running up
against in trying to extend HTTP, HTML, and browser technology into an
application platform. It's just not suited to the job.

HTML 5 is the right answer to the wrong problem. The problem is not how to
make HTML more expressive and more powerful. The problem is how to serve
applications to users. We need an open standard for WebStart/ClickOnce, not a
hack on top of a hack on top of HTML to make web pages appear like
applications (with every web designer's own notions as to how to render UI
elements and display notifications and control flow, etc.).

Next time you see a jQuery modal dialog, ask yourself: why did someone at the
client level have to write this code? Why wasn't it part of the platform? Why
do we have competing implementations of modal dialogs in web apps, in this
day and age? And the dialog is often represented by a severely nested DIV in
a section of markup that has no relation to the modal itself. Semantic
markup, for sure.

------
js4all
Good point. I think it should be mandatory for every site to offer content
that degrades to a usable version when used without JavaScript.

