I guess I understand if this is a conscious decision not to support the many lynx/noscript/etc. users out there (I'm not going to do that myself, so the architecture is not an option for me).
The future started more than two years ago, actually. You can experimentally verify it for yourself -- my understanding is that they use a combination of heuristics and actual evaluation.
For example, putting an invisible div on your web page stuffed with keywords is usually a one-way trip to smackdown city. Put an invisible div on your page and make it visible in response to pushing a button, and they will index the content much of the time. They are known to spend extra resources to make sure popular techniques do not break their algorithms. (From an SEO perspective, I'd suggest being one step behind the cutting edge on innovations like that. 100% JSON site? Cutting edge, probably uncrawlable. Shopping cart rendered using Prototype? Works fine.)
And there was another framework that did essentially the same thing, using DHTML and AJAX to create the page from a blank slate. But I cannot recall the name.
HAppS (a Haskell framework) has also advocated this:
"HAppS does not come with a server-side templating system. We prefer the pattern of developing static web pages and using AJAX to populate them with dynamic content."
I know many technically apt people get up in arms over this, but there comes a point where going into your browser settings (which 99%+ of users will never do), scrolling down to the section marked I Hope You Know What You're Doing, and unchecking boxes means you are affirmatively opting for a second-class experience.
I know the rejoinder: "Blind people can't use your site, you heartless bastard!" It is highly likely that my site and software will be suboptimal for them. It is also highly likely that my site and software will be suboptimal for people who, through no fault of their own, are illiterate. Both of these are tractable issues if someone wants to throw sums of money many multiples of my budget at fixing them.
I have yet to hear a good reason for why that someone must be me.
[Edit to clarify: this is not specifically related to the site I have in my profile, but it could be very easily.]
It is not a web developer's job to go out of his way to support people who deliberately break their browsers and who, more often than not, contribute very little to the bottom line. Most of us are building apps for people who actually want to use them as intended.
It's not a bug, it's a feature!
Hypothetically assuming I had a product targeted at people who were technically capable of developing mashups (or, for that matter, had ever heard the word), I would want to have them use a published API rather than my HTML, because I routinely need to change my HTML. I do not want to have to give everyone 6 weeks of notice every time I do a split test to avoid breaking my core users' sites.
I think it's especially pertinent when talking about social-anything kinds of sites, where people are likely to try to access them from a mobile device.
I don't see it as an issue of responsibility as much as I see it as a customer service issue.
You don't have to "fake the presence of comparable functionality" (and in most cases you simply can't).
Bullshit: the only thing we should do is whatever the bottom line requires. 99.9% of users don't break their browsers on purpose, so you lose very little and save a ton of time and effort by simply not going the extra mile for the pesky few who try to break your site.
"[..]web browsers are not the only clients that will use Urbantastic. Mobile devices, search engine spiders, screen readers for people with disabilities, and RSS readers all need the same data but in different forms. Accommodating any of these is simply a matter of dropping a different rendering front-end in front of the common JSON data server."
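The "different forms of the same data" idea can be sketched as a set of pure render functions over one JSON record. This is an illustration of the pattern, not Urbantastic's actual code; the record shape and function names are my own:

```javascript
// One JSON record, several rendering front-ends (sketch only --
// the field names here are invented, not Urbantastic's schema).
const event = { title: "Block Party", venue: "Dolores Park", date: "2009-06-20" };

// Full HTML for browsers.
function renderHtml(e) {
  return `<div class="event"><h2>${e.title}</h2><p>${e.venue}, ${e.date}</p></div>`;
}

// Stripped-down text for mobile devices or screen readers.
function renderText(e) {
  return `${e.title} - ${e.venue} (${e.date})`;
}

// An RSS item for feed readers -- same data, third format.
function renderRssItem(e) {
  return `<item><title>${e.title}</title><description>${e.venue}</description><pubDate>${e.date}</pubDate></item>`;
}

console.log(renderText(event)); // "Block Party - Dolores Park (2009-06-20)"
```

The point is that only the last step differs per client; everything up to it is shared.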
It will be a /much/ simpler site, but you'll be able to get everything done on it. I figured it's easier to separate it out than try to shoehorn every use into one format.
Without knowing the specific details, I'd imagine the JSON response has directives for what static HTML to load if needed, which results in more XHR requests to get those files. The client-side JS simply needs to know how to process the JSON it's given; it doesn't need to know any business or persistence logic.
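Guessing at the wire format, such a directive scheme might look like this; every field name below is invented for illustration:

```javascript
// Hypothetical response shape: the server tells the client which
// static HTML fragments the page still needs.
const response = {
  data: { orgName: "Helping Hands", members: 42 },
  fragments: ["org-header.html", "member-list.html"]
};

// Return the fragment URLs not already in the client-side cache;
// the caller would then issue one XHR per missing fragment.
function missingFragments(resp, cache) {
  return resp.fragments.filter((url) => !(url in cache));
}

const cache = { "org-header.html": "<div>...</div>" };
console.log(missingFragments(response, cache)); // ["member-list.html"]
```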
What I'm curious about is how he handles URLs (if everything is XHR, the URL will always stay the same, which is kind of a pain for linking to specific stuff, unless you do anchor workarounds like Facebook does). Also, I'd be curious whether he uses the static HTML files as templates (injecting data into them client-side) or just has a TON of tiny HTML fragments.
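The anchor workaround boils down to keeping state after the `#`, since changing the fragment doesn't trigger a page load. A minimal sketch (the route format is made up):

```javascript
// Parse a Facebook-style hash URL such as
//   http://example.com/#/org/123
// into a route the client-side JS can dispatch on.
function parseHash(hash) {
  const parts = hash.replace(/^#\/?/, "").split("/").filter(Boolean);
  return { page: parts[0] || "home", id: parts[1] || null };
}

// In a browser you'd wire this to the hashchange event, e.g.:
//   window.onhashchange = () => render(parseHash(location.hash));
console.log(parseHash("#/org/123")); // { page: "org", id: "123" }
```

This makes deep links copy-pasteable even though every navigation is an XHR.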
I expect that eventually this will cause too much up-front load time, so I'm planning on having the JS load bundles of it on demand. Reducing total HTTP requests is a big usability win, in my experience.
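Bundling could be as simple as one request returning several fragments keyed by id, split apart client-side. This is a guess at the approach, not his code:

```javascript
// One XHR returns a bundle of fragments instead of N tiny files.
// The bundle shape is hypothetical.
const bundle = {
  "org-header": "<h1>Org</h1>",
  "member-list": "<ul>...</ul>",
  "event-form": "<form>...</form>"
};

// Unpack the whole bundle into the fragment cache in one pass,
// turning N HTTP requests into 1.
function unpackBundle(bundle, cache) {
  for (const [id, html] of Object.entries(bundle)) {
    cache[id] = html;
  }
  return cache;
}

console.log(Object.keys(unpackBundle(bundle, {})).length); // 3
```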
To answer your URL question, I use attributes, like this:
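(The original snippet didn't survive; judging from the reply below, it was a link whose href points at a real static page like org.html, with a custom attribute telling the client-side JS what to show. The attribute name here is my guess, not his actual markup.)

```javascript
// Hypothetical attribute-driven link: crawlers follow the href,
// while the client-side JS intercepts the click and reads the
// custom attribute to decide which fragment to reveal.
const link = '<a href="org.html" show="team">Our Team</a>';

// Pull the custom attribute out of a link tag. In a browser this
// would just be element.getAttribute("show").
function showAttribute(tag) {
  const match = tag.match(/show="([^"]*)"/);
  return match ? match[1] : null;
}

console.log(showAttribute(link)); // "team"
```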
So in the case of that link... org.html has all the HTML fragments it needs for anything that can be done on that page, in a hidden div?
A neat idea for sure, thanks for sharing... (now off I go to play with it!)