JavaScript and URLs (shiflett.org)
134 points by icey on March 1, 2011 | 69 comments



Using a # to reflect JS state changes in the URL isn't some huge developer cock-up, as it's being portrayed to be; it's just making the best user experience out of the functionality available - see this, for instance: http://code.google.com/p/reallysimplehistory/

HTML5 provides the new history.pushState API, which provides all the functionality hashbangs are being used for, without the hashbangs: http://www.kylescholz.com/blog/2010/04/html5_history_is_the_.... The fact that a new feature has been added for it goes to show that it fills a real need - Facebook, for instance, uses history.pushState if present, and falls back on hashbangs when it isn't.
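A rough sketch of that fallback pattern (my own illustration, not Facebook's actual code; loadContent() stands in for whatever XHR-and-render routine the app uses):

    function navigate(path) {
        if (window.history && window.history.pushState) {
            // Modern browsers: put the real URL in the address bar
            window.history.pushState({ path: path }, '', path);
            loadContent(path);
        } else {
            // Older browsers: fall back to a hashbang
            window.location.hash = '#!' + path;
        }
    }

    // Handle the browser's back/forward buttons in the pushState case
    window.onpopstate = function (e) {
        if (e.state && e.state.path) { loadContent(e.state.path); }
    };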

It's often a very important thing to be able to do: if you have (for example) multiple tabs or other frontend navigation in your web app, pressing back would otherwise take you right out of the application, rather than back to the last place you were.


It seems that much of this consternation about munged URLs (here and in other threads) is about preserving the functionality of the "back" button.

Why do we need a back button?

The DOS and Unix command lines never had a back button. Lotus 1-2-3 and WordStar and Word and Excel never had a back button. Photoshop and GIMP never had a back button. SQL scripts and database servers never had a back button.

Many of these programs do have application-specific implementations of an "undo" function. Perhaps that is the way to go for future software, rather than trying to wedge a universal "back" function into applications that don't really map to that paradigm? What does "back" even mean when the application is a game, for example? Flight Simulator and Ultima and Doom never had back buttons.


The back button is one of the greatest inventions in the history of usability. It's the centerpiece of the thick user agent, and the user agent is why the web took over the world.

The web is a platform for modal applications with addressable state. To meet the expectations of users, any app on the web must somehow shape itself to that model.

BTW have you ever used a navigation based app that didn't have a back button? Try the native YouTube app on iOS. It's infuriating.


...but the native YouTube app on iOS does have a back button: obviously there's something important about navigation that "stack of pages with a back button" isn't sufficiently describing.


It has a broken back button. The history is erased when you tap any of the bottom menu items.


I spend most of my time on the iOS youtube app in mortal fear of accidentally brushing one of the bottom menu items.


It's nothing to do with the back button. It's about addressability - being able to link to things. Links are kind of important on the Web.


> It's about addressability - being able to link to things. Links are kind of important on the Web.

Of course, but not in all applications and situations. What does it mean to link to the middle of a Farmville game session? It's by definition a transient state within a process that does not need re-linkability or history. Similarly, we've been using desktop applications forever with no notion of addressability. You don't send an image with the expectation that the recipient can jump into the middle of your editing actions in Photoshop.

When a web application is fundamentally about an interactive process rather than publishing content, the notion of back and history can lose much of its meaning. Using Back as a universal undo button, or a URL to bookmark any arbitrary slice of content, does not hold up in a world of complicated partial AJAX page updates. Should designers continue to contort Javascript and the very notion of URLs to satisfy this increasingly-misplaced user expectation, or work to educate users that sometimes "Back" or bookmarking cannot have a meaningful result?

Nobody expects a "back" or "undo" button in everyday activities like driving a car or cooking or making social conversation. How did this expectation become so irreversibly fundamental in computers?


> Nobody expects a "back" or "undo" button in everyday activities like driving a car or cooking or making social conversation. How did this expectation become so irreversibly fundamental in computers?

Wow. We're only just now getting to the point where it isn't possible to accidentally render your computer unusable with a few stray actions.

You seem to be forgetting just how unforgiving computers were even 8 years ago, and why iOS devices are so enticing to regular folks.


Undo isn't back. They're different concepts. If I send a friend request on Facebook and then click back, that doesn't cancel the friend request.

Don't conflate the two concepts. Undo good, back... not always required?


I think it's the non-linear processes that have trouble with the back button. Clicking back should go to the last modal state. If your app is fundamentally non-modal, then the last state is outside your app and clicking back should exit (after persisting all state, of course).

But most apps can be broken into at least a few states. For any document based app, like Photoshop, navigation should probably move between document views. Any modal dialogs should be closable with the back button. You don't have to use it for everything, it just has to do something sensible and non-destructive at any given time.


    You don't send an image with the expectation that the recipient can jump into the middle of your editing actions in Photoshop.
Actually, that sounds really appealing.


> Nobody expects a "back" or "undo" button in everyday activities like driving a car or cooking or making social conversation. How did this expectation become so irreversibly fundamental in computers?

People make mistakes. All people, in everything they do, at every level of responsibility.

"Back" or "undo" buttons in everyday activities like driving a car or cooking or making social conversation would be astoundingly useful. In your first example alone, literally tens of thousands of lives would be saved every year.

The expectation is irreversibly fundamental in computing because in computing, unlike reality, it is actually possible.


Don't underestimate the value of the back button. In an interlinked software world where you can view your email, read a party invite, add the date to your calendar, and book a flight out there... users need a way to unwind the stack.


In a web app where addressability doesn't matter, it's all about the back button. The user doesn't expect to be logged out when they click back out of force of habit.


So many issues with such a short comment!

* If the back button behaviour is important, perhaps not breaking it in the first place is an appropriate approach

* The addressability of resources is one third of the fundamental basis of the Web. Ignoring it is essentially non-web, so I suggest the label "web-app" is incorrect.

* The user expectation of clicking the back button but remaining on your one-page website sounds like a usability problem brought about by applying a non-web-like interface inside a web browser. Perhaps a better user experience is not breaking the well-understood web-navigation model, and thus not confusing the user. Or perhaps these "web-apps" need to make it clear to the user that they deliberately don't follow the characteristics of a web resource?


Let's take a look at the alternatives to breaking the back button, then:

- The web application could serve static pages and then update them with AJAX

What if the user clicks back /then/? All the AJAX updates would be lost, and they'd go back to an older version of the page - they'd then see the data on the page "jump" to the updated version.

- The web application could not use AJAX at all, and serve everything as static pages, without automatic updating

Some web-applications can't work like that. Mine has instant messaging, and I want the IM windows to be there when it loads. Besides, most users /like/ AJAX. They're not thinking "oh no, this application ruined the web", they're thinking "nice, the page updated itself".

- The web application could use Flash or some other plugin, instead of the native HTML/Javascript

Which of these are you suggesting developers do instead? Why is the HTML5 history/hashbang URL model not better than all of these?


"Some web-applications can't work like that. Mine has instant messaging, and I want the IM windows to be there when it loads."

There are already internet protocols for real-time notifications, like XMPP and a whole host of non-open protocols. Re-implementing messaging on a platform that isn't by its nature a push platform will always be inferior to the real thing.

Wouldn't it make more sense to implement a web-like application on-top of a real-time messaging platform instead of the other way around?

(Though, it's not clear what a back button should be doing in an "instant messaging" interface - will it undo the conversation? Does a user bookmark a message or a conversation so they can edit/refine individual messages later?)

"Which of these are you suggesting developers do instead?"

I'm suggesting developers use the right tools and platforms for the job.


So you're suggesting there shouldn't be any (for example) instant messaging in the web browser, it should be in a separate application.

Well, it's pretty clear what you mean now, and I guess we'll just have to agree to differ. Real-time AJAX applications in the web browser are the future, and I know I'm not the only one who thinks so.

Also, if you think I'm trying to say the back button should do anything to an IM conversation, you're misunderstanding the whole issue.


>it's just making the best user experience out of the functionality available

That's a pretty broad generalization. On Twitter for example I'd say they're intentionally trying to make it harder for you to have a URL to reference an individual tweet.


I love how all these semantic HTML police are putting the hate on JavaScript. Comparing it to Flash is a gross misrepresentation. If Flash were a superior technology that easily allowed you to create a superior user experience that behaved like browser applications ought to, it wouldn't be dying away. It doesn't, but JavaScript and skilled JS devs can!

Why should we be forced NOT to use a technology that provides better responsiveness and productivity than what was traditionally possible? Why force us to live in a world with server round trips for everything, just so we can say we fit perfectly within an artificial restriction defined 20 years ago, before any consideration was given to the Internet being used for anything more than viewing linked documents?

There still exists the 'content web' suitable for browsing and navigating content! For Wikipedia and the plethora of blogs carrying content, it remains business as usual, with no need to change.

There are other apps that provide enhanced functionality - Gmail, Google Maps / Docs / Calendar, Zoho Writer, Grooveshark, etc. - delivering to anyone with an internet connection and a modern browser a far better experience than any set of hyper-linked documents ever could.


> It doesn't, but JavaScript and skilled JS devs can!

You might be overestimating the differences between Flash and JS here -- especially when you add HTML5 into the mix. What's to stop a skilled JS dev from playing <audio> when the user hits a page, burning processors with a few autoplaying <videos>, after forcing the user to sit through an interminable, animated <js/css3/svg> 'loading' screen? At that point, it's up to our own good tastes to decide how to wield that kind of power.


I don't understand what you're saying. What if JavaScript devs use HTML5 tools to purposely create a poor experience, as witnessed in Flash apps?

Then their site doesn't get return visitors and starts dying, just like Flash is.


I'm saying kind of the opposite -- JS/HTML/CSS and Flash are now more similar than they are different. It's not the platform that's the problem. It's what you can do with (and how you can abuse) the platform.


If you're running Flash content on a page, Flash is going to be running an event loop at the SWF's frame rate, even if it's not actually doing anything. JS, on the other hand, can be totally idle when it's not running any code.

Most JS use is sparing and lightweight and more about interaction than action/animation. Most Flash use is heavy-handed, ham-fisted, and overbearing, done more for cosmetic effect than additional function. Is this a part of the technology? No, but it certainly is nurtured by the differences between the development environments — Flash is still sold as an animation studio! What's more, JS gets to manipulate the DOM directly, giving it a much nicer relationship with the typical web experience than Flash, which is effectively a replacement for the DOM.


>Most JS use is sparing and lightweight

Have you used Twitter or Facebook on resource constrained machines? Even on latest nightly builds of Chromium it's painful on my Dell Mini netbook.

The point is that you can use Javascript to do just about everything you can do in Flash. And Adobe CS6 will almost certainly contain some tools to help you use your existing Flash assets in a pure JS+HTML+CSS environment. So the animation studio for Javascript doesn't exist yet, but we're talking about the near future where it will, and it will be just as obnoxious as Flash (but much harder to block from consuming CPU).


The new Twitter really is the worst, and it's not just the Javascript. They've got that hovering topbar with an RGBA shadow and gradient background, which I've personally verified as being a source of much of the slowness on browsers that support these things. You can test it for yourself in any Webkit browser or FireBug — just disable those two CSS properties, and new Twitter feels immediately snappier.
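If you want to try it from the console, something along these lines works (the selector here is a guess on my part - substitute whatever the topbar actually uses):

    // Kill the two expensive properties on the fixed topbar, then compare scrolling
    var bar = document.querySelector('#top-bar, .topbar');  // hypothetical selector
    if (bar) {
        bar.style.boxShadow = 'none';        // the RGBA shadow
        bar.style.backgroundImage = 'none';  // the gradient
    }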

Imagine what Gmail would be like if they went overboard with CSS.


> Why should we be forced NOT to use a technology that provides better responsiveness and productivity than what was traditionally possible? Why force us to live in a world with server round trips for everything, just so we can say we fit perfectly within an artificial restriction defined 20 years ago, before any consideration was given to the Internet being used for anything more than viewing linked documents?

Who said anything about forcing? The article just seems to be asking what people think... has anyone actually talked about forcing developers to step backwards from using JS?


> Why should we be forced NOT to use a technology that provides better responsiveness and productivity than what was traditionally possible?

On many websites heavy (ab)use of JavaScript/AJAX has the opposite effect. It makes them less responsive and lowers productivity compared to what it could be if those sites were mostly static. Instead of waiting for one HTTP request to complete, we're waiting for fifty or a hundred and fifty. Instead of waiting for optimized browser code to render HTML, we're waiting for a tangled mess of inefficient and hastily written scripts to do its magic.


I intended to leave this comment on the linked website, but unfortunately their mandatory OpenID login is completely broken with the 3 OpenID accounts I own.

I always love taking philosophical guidance and development advice from people who can't even get their own shit together.


You don't have to supply an OpenID to leave a comment there. It's completely optional.


>20 years ago before any consideration was given to the *Internet* being used for anything more than viewing linked documents [emphasis mine].

You mean the web. Do not confuse the web with the internet.


Don't be pedantic.


But this is the key distinction! People are making proprietary protocols (serialized as JSON or SOAP, but the meaning of a request or response is completely ad hoc) and using them to build siloed apps. These appear to be part of the World-Wide Web, because they tunnel over HTTP and TCP. But they actually aren't, because their exposed resources don't have stable URLs or formats and are basically unusable by anything other than one blob of js you have to trust. This should be recognized as tending to displace the web, rather than contributing to it.


I was not being pedantic. I was being subtle about my point that if people wanted to use the internet as an applications-delivery platform they could have defined their own damned internet service instead of making life difficult for those of us who want to use the web for its original purpose of reading and writing.

It makes life difficult for us by making the browser into a huge ball of complexity and churn. (I still remember the days when the people who wanted to use the web as an applications-delivery platform implored people to upgrade to Internet Explorer 5 to make their lives easier, and of course they were not satisfied with IE 5 for long.) The vast majority of the crashes, unresponsiveness and mystifying software behavior on my Linux and OS X systems come from the graphical browser.


So, where is the solution that gives you pretty URLs with HTML5 history for those who can have it, and identical URLs preceded with hashes for those who can't, in an easy-to-use plugin? Also, where is the framework/pattern that makes both styles of URL work in the absence of either technology, using something like mod_rewrite?

In other words, how do we get it all, future-proofed, without so much frustration and so many debates?


That's a nasty solution though, because then every item on your site ends up with two URLs - a concrete, proper, doesn't-break-the-web one and a lousy Ajax hashbang one.

Having one URL per piece of content is especially important in this age of sharing and aggregation, where you don't want your pagerank / number-of-shares / other metrics spread equally across two different URLs.



And for my current site, those concerns are big. Sharing content is a key focus. The design has as a core philosophy: one simple, highly-readable URL per piece of content.


The hashbangs work because of Google's AJAX crawling scheme. If you set it up right, just as Twitter has, Google has no problem crawling the content.

http://code.google.com/web/ajaxcrawling/
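For anyone who hasn't read the doc, the gist (as I understand it) is that the crawler rewrites the hashbang into a query parameter and your server answers with an HTML snapshot of that state:

    URL users see and share:    http://example.com/#!/photos/42
    URL Googlebot fetches:      http://example.com/?_escaped_fragment_=/photos/42

Your server's job is to return static HTML for that state at the second URL (example.com and the path are just placeholders).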


Just use the history API and call it a user incentive to upgrade their browser. The site would still work; it would just be slower.


And that is exactly what I've done on my most recent site. I see this as the best solution, since the site caters to people who are more likely to have modern browsers that support the History API, giving them rich interactions and leaving only a slim margin of viewers with a "static" version of the site.

But it would be nice to be able to degrade nicely, and still support some degree of ajax with linkable states. Beggars can't always be so choosy, I suppose.


Solve it at the server/browser level, not in every application.

1. Browser: GET /something, with Referer: /anotherpage

2. Server: here's the diff data only

Done.
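To spell out what I mean, something like this on the server (a hypothetical Node sketch; renderFullPage and renderDiff are stand-ins for your own rendering code):

    var http = require('http');

    // Stubs standing in for real rendering code
    function renderFullPage(url) { return '<html>full page for ' + url + '</html>'; }
    function renderDiff(url, referer) { return JSON.stringify({ changed: url, from: referer }); }

    http.createServer(function (req, res) {
        var referer = req.headers.referer || '';
        if (referer.indexOf('http://example.com/') === 0) {
            // Navigation from inside the app: send only the data that changed
            res.writeHead(200, { 'Content-Type': 'application/json' });
            res.end(renderDiff(req.url, referer));
        } else {
            // Fresh visit, bookmark, or crawler: send the whole document
            res.writeHead(200, { 'Content-Type': 'text/html' });
            res.end(renderFullPage(req.url));
        }
    }).listen(8080);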


Maybe the real problem is that people aren't making a distinction between pulling down content and pulling down content + markup/presentation/interaction.

If you're looking to parse information from a remote resource, you should be pulling it down in JSON or XML anyway, so as long as your application provides data in both human-usable (i.e. Javascripty) format and machine-readable (i.e. RESTful JSON/XML) format, both consumer groups should be happy, right?

I guess one of the problems is that Google only pulls down content + markup. It would be nice if there was a way to provide machine-optimized content to crawlers for indexing, basically telling the crawler "here's the data in XML format, and here's a link to the data in human-interactable format". It's sort of what Google's escaped_fragment URLs do, but escaped_fragment still expects content + markup.


I guess a huge problem with exposing machine-optimized content to crawlers is that there's a good chance that what the user sees in the end differs from what the crawler sees.

That doesn't have to be by malicious intent, it can be enough that a backend developer adds some fields, and the template doesn't use them.

On the other hand, the machine-exposed data is missing the headings that are usually present on all web pages, so if the user searches for <company name> + <keyword>, special care must be taken that the <company name> part doesn't fail.


Anyone saying that Postel's law is one of the reasons that the web is robust needs their head examining. It was a terrible idea that has been plaguing browsers for a long time.


You are absolutely, categorically wrong; I am certain of this. Your comment is about as wrong a comment as I have ever seen on this site with a positive comment score. Building the web without Postel's law would be like trying to build a world wide web solely out of CORBA and/or DCOM. The costs of specifying, synchronizing and communicating machine formats would have dwarfed all productive work.

But it's academic in any case; any web that anyone tried to engineer without something along the lines of Postel's law would have been outcompeted very early by one that followed it. A strict web would have kept too many marginally qualified people out and would have had much smaller returns owing to lack of scale (Metcalfe's law etc.). Keeping marginally qualified people out means they are strongly discouraged from learning (instead they give up), and that would throttle any possible growth.


There are no exceptions to Postel's Law: http://diveintomark.org/archives/2004/01/08/postels-law


There are limits to how generous one should be, though.


Another relevant blog post on the importance of URL design: http://warpspire.com/posts/url-design/


The amount of importance placed on this concept of The Web is strange. It's like a religion.

Any developer working on any application has to make decisions and trade-offs about platforms, user experience, accessibility, and a hundred other things. If something else is more important than making the back button work or having pretty urls, then who cares? Focus on what improves your app the most, not adhering to tradition. It breaks the web? Get real, it doesn't break anything. It just has other priorities. The only reasonable barometers for whether something works are user adoption and satisfaction.


I totally agree. We are seeing the emergence of completely new uses for web pages, i.e. using a web page as an interactive tool rather than a content delivery system. At the moment the infrastructure just isn't there to provide proper support, so we are having to work around it, sometimes with fairly nasty kludges. Until the infrastructure catches up with the change in use, there will have to be compromises. Saying these compromises are intolerable problems that should be fixed is unhelpful and misses the wider picture.


Perhaps this is too obvious to state, but progressive enhancement obviates most of this. Any linkable state (that someone might plausibly want to link to) should be addressable via a regular old URL. Add a layer of JS to intercept clicks and serve ajax content, with the state expressed in the hash.

The hash should look suspiciously like the fragment that non-ajax agents would see.

I say this not as a dogmatic should, but because I find separating concerns makes it easier to develop, and preserves web-ness. Ajax is an enhancement, not an architecture.
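Concretely, the enhancement layer can be pretty small - a sketch, assuming loadFragment() is your own XHR-and-render step:

    // Real hrefs everywhere; JS intercepts clicks and mirrors the path into the hash
    document.addEventListener('click', function (e) {
        var a = e.target.closest && e.target.closest('a[href]');
        if (!a) return;
        e.preventDefault();
        location.hash = a.getAttribute('href');   // triggers the hashchange handler
    }, false);

    window.addEventListener('hashchange', function () {
        loadFragment(location.hash.slice(1));     // back/forward and bookmarks work too
    }, false);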


The author seems oblivious to the fact that there are now many different kinds of applications on the web.

If every website was a blog, then sure - addressability, archiveability, searchability are paramount. If you're building the control panel for a nuclear power plant, you have a completely different set of concerns.


Perhaps using the Web isn't the appropriate solution for a nuclear power plant control panel. There are other, more suitable, internet protocols to use for this sort of appliance than the Web (if indeed, making a nuclear power plant control panel accessible over the Internet is an actual requirement).


Perhaps using The Web isn't appropriate for SCADA controls, but what's the reason why using a web stack is inappropriate? People here seem to forget that things like J2EE and ASP.NET have displaced decades of crappy console and Windows Forms applications.

The moment you accept that there is The Web and there are web apps and they're not the same thing, you're right in the middle of the debate Shiflett is talking about. The down-with-hashbang crowd doesn't acknowledge any such difference.


There are absolutely nuclear power plants which use web technologies for their control panels.

Specifically, there are at least some which use ASP.NET in their SCADA control systems (although they are not connected to the internet).

I have performed security assessments on several of these.


I can understand people's frustration when the hashbang feature breaks how URLs work in the browser, but it can work nicely if done right. I've built one of my apps using hashbangs to encode the browsing state, and went to great lengths to ensure bookmarking and the back button work correctly in the browsers.

It works great in the end. Every URL points to a specific page and still gets the Javascript interactivity. Some sample URLs:

http://www.previouslook.com/hnews/new#!2011-03-01-15-15.22

http://www.previouslook.com/hnews/new#!2011-03-01-12-15.15

http://www.previouslook.com/hnews/new#!2011-03-01-12-45.12
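For what it's worth, the core of it is small - something like this (a simplified sketch, not the actual code; loadSnapshot() stands in for the fetch-and-render step):

    // Decode the state after "#!" on load and on every hash change;
    // the browser's own history then gives you back/forward for free.
    function render() {
        var state = location.hash.replace(/^#!?/, '');   // e.g. "2011-03-01-15-15.22"
        loadSnapshot(state || 'latest');                  // 'latest' is a made-up default
    }
    window.onhashchange = render;
    render();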


If you've gone to great lengths already, then maybe it isn't much additional work to support the HTML5 history API and also provide server-side rendering to non-JS clients.

(BTW, that's a pretty cool site.)


It's impossible to provide a server-side fix, since the URL fragment (the part after the hash mark) does not get sent to the server.


We need a complete rewrite of the current web languages. We are struggling with DIVs and positioning and more DIVs to obtain results these languages were never designed to produce. I'm mostly pointing at HTML, but JavaScript could use one too.


I learned some mod_rewrite tricks for a personal project, only to discover many of my peers consider useful URLs to be unattainable magic!

I think it's a disservice to common users of any site when the address becomes an opaque mess of seemingly random characters.


The hashbang is a big gory mess, glorified for all the wrong reasons. Mostly it breaks best practices and destroys the chance for graceful upgrading and degradation.

What is truly amazing is that there actually are other solutions that still offer best practices like graceful upgrading and degradation, with less effort. This article goes into the issues and the alternatives: https://github.com/balupton/history.js/wiki/Intelligent-Stat...


I agree that JavaScript is breaking the traditional web - linked static HTML pages with some forms in them. But I also believe that one day people will start to forget this bumpy ride. I am thinking of http://nodejs.org/ and http://jsdom.org/ - if the user agent has JavaScript disabled, the client-side JavaScript code could be executed on the server.


In hashbang schemes the client-side JavaScript is being used to avoid full page loads and to reduce server-side rendering (though this second argument strikes me as a red herring).


" This is what we call “tight coupling” and I thought that anyone with a Computer Science degree ought to have been taught to avoid it."

Unfortunately, there are two reasons why this assumption falls apart. First, there are a lot of non-Computer Scientists out there writing software. Second, the push for everyone-gets-a-degree credentialing and the cries of "we're short on qualified candidates" in industry have led to a serious erosion of standards in universities; either the topics aren't being taught or students are passed without the lesson being absorbed.

On the first point, I think this is a good and a bad thing. It's good because it's democratic. It means that more people are finding their own way, it means that systems are getting more intuitive, it means that barriers to entry are being torn down. Literacy is good for everyone. But it's also bad because the actually trained Computer Scientists and Software Engineers aren't focusing on keeping this process going. Instead, they just want to open consulting firms and roll out yet another CRUD business intelligence Web app. Concerning, to say the least.

On the second point, there is a serious problem with the number of people we are graduating through universities. And I don't care what the top universities are doing; for each of them there are 50 more low-end state schools and technical schools churning out graduates. I know at my university I received an education in such Software Engineering topics. BUT--and this is a big but--it was all but unrequired knowledge. I can't say any of it was truly "required", because I ended up working with quite a few of my classmates several years later, and they had forgotten it all. They stood in awe of my prowess as a programmer, when all I was doing was using the same lessons we had all received. If you were good enough at programming to avoid basic syntax errors, you were good enough to get through to the end with a barely passing grade. And few hiring managers care about grades.

These people that I graduated with have benefited from the democratization of programming as well. We should make this relationship more explicit. Keep Computer Science pure and only graduate people who have the chops. Encourage people who want to write websites to go into Graphic Design. Put programming on equal terms with algebra and creative writing: we all have to study it, we don't all have to be experts, but the people studying the subject specifically had better be held to the highest standards.


As a web developer I believe the problem and the bottleneck has always been, and always will be, the browsers. If there were an easy way to give our users what they want (flashy, ajaxy webapps) without breaking the web, I'm sure developers would be all for it. But for now, doing these things properly is viewed as a time-intensive value-add. Web developers will always be a step ahead of the browsers and therefore have to resort to breaking the web in order to achieve the functionality their users/managers pay them to create.

The only way this problem will be resolved is for more of your "true" computer scientists to start working on browsers. However, at the moment, many of them are making too much money at their consulting firms fixing the websites created by graphic designers.


Users don't want hashbangs. They want faster loading content. Hashbangs are a kludge to get it. This is just one example of the issues that we're running up against with trying to extend HTTP, HTML, and browser technology into an application platform. It's just not suited.

HTML 5 is the right answer to the wrong problem. The problem is not how to make HTML more expressive and more powerful. The problem is how to serve applications to users. We need an open standard for WebStart/ClickOnce, not a hack on top of a hack on top of HTML to make web pages appear like applications (with every web designer's own notions as to how to render UI elements and display notifications and control flow, etc.).

Next time you see a jQuery modal dialog, ask yourself, "why did someone on the client level have to write this code? Why wasn't it part of the platform?" Why do we have competing implementations of modal dialogs on web apps, in today's day and age? And it's often represented by a severely nested DIV in a section of code that has no meaning to the modal itself. Semantic markup, for sure.


Good point. I think it should be mandatory for every site to offer content that degrades to a usable version when used without JavaScript.



