The future started more than two years ago, actually. You can experimentally verify it for yourself -- my understanding is that they use a combination of heuristics and actual evaluation of the JavaScript.
For example, putting an invisible div on your web page stuffed with keywords is usually a one way trip to smackdown city. Put an invisible div on your web page and make it visible in response to pushing a button and they will index the content much of the time. They are known to spend extra resources to make sure popular techniques do not cause their algorithms to break. (From an SEO perspective I'd suggest being one step behind the cutting edge on innovations like that. 100% JSON site? Cutting edge, probably uncrawlable. Shopping cart rendered using Prototype? Works fine.)
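For the curious, the hidden-but-legitimate pattern described above is roughly this (element IDs are made up for the example):

    // Content that starts out invisible but is revealed by a user action.
    // This is the pattern that reportedly gets indexed much of the time,
    // unlike a permanently invisible keyword-stuffed div.
    const details = document.getElementById("extra-details") as HTMLElement;
    const toggle = document.getElementById("show-details") as HTMLButtonElement;

    details.style.display = "none"; // hidden on page load

    toggle.addEventListener("click", () => {
      details.style.display = "block"; // visible in response to a click
    });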
No, but I've had a suspicion for a while that something like Erlang could actually be useful for doing web stuff if, instead of doing much in the way of templates (Erlang is not fun for strings, IMO), it concentrated on sending and receiving JSON.
I know many technically apt people get up in arms over this, but there comes a point where going into your browser settings (which 99%+ of users will never do), scrolling down to the section marked I Hope You Know What You're Doing, and unchecking boxes means you are affirmatively opting for a second-class experience.
I know the rejoinder: "Blind people can't use your site, you heartless bastard!" It is highly likely that my site and software will be suboptimal for them. It is also highly likely that my site and software will be suboptimal for people who, through no fault of their own, are illiterate. Both of these are tractable issues if someone wants to throw sums of money which are many multiples of my budget at fixing them.
I have yet to hear a good reason for why that someone must be me.
[Edit to clarify: this is not specifically related to the site I have in my profile, but it could be very easily.]
As long as you're fine with web crawlers not seeing your content, people not building mashups based on your HTML, and so forth, be as much of a heartless bastard as you want. But do keep in mind that Googlebot is the biggest disabled user in the world. If blind people can't see it, search engines can't see it. And if search engines can't see your website, who cares about you?
If I want people building mashups with my site, I'll provide an API. HTML is not an API.
It is not a web developer's job to go out of his way to support people who deliberately break their browsers and, more often than not, contribute very little to the bottom line. Most of us are building apps for people who actually want to use them as intended.
Hypothetically assuming I had a product targeted at people who were technically capable of developing mashups (or, for that matter, had ever heard the word), I would want to have them use a published API rather than my HTML, because I routinely need to change my HTML. I do not want to have to give everyone 6 weeks of notice every time I do a split test to avoid breaking my core users' sites.
You don't have to "fake the presence of comparable functionality" (and in most cases you simply can't).
Bullshit; the only thing we should do is whatever the bottom line requires. 99.9% of users don't break their browsers on purpose, so you lose very little and save a ton of time and effort by simply choosing not to go the extra mile for the pesky few who try to break your site.
Author here. Here's the relevant line in the post:
"[..]web browsers are not the only clients that will use Urbantastic. Mobile devices, search engine spiders, screen readers for people with disabilities, and RSS readers all need the same data but in different forms. Accommodating any of these is simply a matter of dropping a different rendering front-end in front of the common JSON data server."
It will be a /much/ simpler site, but you'll be able to get everything done on it. I figured it's easier to separate it out than try to shoehorn every use into one format.
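Hypothetically, the "different rendering front-end in front of the common JSON data server" idea could look something like this (the endpoint, types, and markup here are invented for illustration, not Urbantastic's actual code):

    // One JSON data source...
    interface Listing {
      title: string;
      price: number;
    }

    async function fetchListings(): Promise<Listing[]> {
      const res = await fetch("/api/listings"); // hypothetical endpoint
      return res.json();
    }

    // ...and more than one front-end. The rich client renders the JSON into the page:
    async function renderRichClient(container: HTMLElement): Promise<void> {
      const listings = await fetchListings();
      container.innerHTML = listings
        .map((l) => `<div class="listing">${l.title}: $${l.price}</div>`)
        .join("");
    }

    // The "much simpler site" for spiders, screen readers, and RSS could render
    // the very same JSON to plain HTML instead:
    function renderSimpleHtml(listings: Listing[]): string {
      const items = listings.map((l) => `<li>${l.title} ($${l.price})</li>`).join("");
      return `<ul>${items}</ul>`;
    }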
No, presumably the client clicks on something, which calls an action on his server via xhr, which does some server-side logic (say, update the cart and compute a new total) and returns a json packet, which the client then uses to update the page.
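Roughly like this (endpoint and field names invented; fetch used here for brevity where the original would be a raw XHR):

    interface CartUpdate {
      itemCount: number;
      total: number;
    }

    document.getElementById("add-to-cart")?.addEventListener("click", async () => {
      // Client clicks, which hits a server-side action...
      const res = await fetch("/cart/add", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ sku: "ABC-123", qty: 1 }),
      });

      // ...the server updates the cart, computes a new total, and returns a JSON packet...
      const cart: CartUpdate = await res.json();

      // ...which the client uses to update the page. No HTML comes back at all.
      document.getElementById("cart-total")!.textContent = `$${cart.total.toFixed(2)}`;
    });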
Without knowing the specific details, I'd imagine the json response has directives for what static html to load if needed, which results in more xhr requests to get those files. The client side js simply needs to know how to process the json it's given, it doesn't need to know any business/persistence info.
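Purely speculative, but a "directive" style response might look like:

    // Invented field names; just illustrating the shape of the idea.
    interface DirectiveResponse {
      data: Record<string, unknown>; // the actual business data
      fragmentUrl?: string;          // static HTML to fetch, if new markup is needed
    }

    async function applyResponse(resp: DirectiveResponse, target: HTMLElement) {
      if (resp.fragmentUrl) {
        // A follow-up request pulls in the static HTML fragment the server named.
        target.innerHTML = await fetch(resp.fragmentUrl).then((r) => r.text());
      }
      // The client-side JS only knows how to apply the data it's handed; it
      // carries no business or persistence logic.
      console.log("apply to page:", resp.data);
    }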
What I'm curious about is how he handles urls (if everything is xhr, then the url will always stay the same, which is kind of a pain for linking to specific stuff, unless you do anchor workarounds like Facebook does). Also, I'd be curious if he uses the static html files as templates (injecting data into them clientside) or just has a TON of tiny html fragments.
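The anchor workaround is the usual hash-fragment trick, something like this (a generic sketch, not anything specific to Urbantastic):

    // Changing the fragment gives a linkable, back-button-friendly URL without
    // a full page load.
    function navigateTo(view: string): void {
      window.location.hash = view; // e.g. http://example.com/#listings/42
    }

    window.addEventListener("hashchange", () => {
      const view = window.location.hash.slice(1);
      // Kick off the xhr/JSON dance for whatever view the fragment names.
      console.log("load view:", view);
    });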
So far, all the HTML that a given page will need is part of the same document. The templates for the dynamic content are all stored inside a hidden div.
I expect that eventually this will cause too much of an up-front load time, so I'm planning on having the JS load bundles of it on demand. Reducing total HTTP requests is a big usability win, in my experience.
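To illustrate, the hidden-div template approach amounts to something like this (class names are invented for the example):

    const templateStore = document.getElementById("templates") as HTMLElement;
    templateStore.style.display = "none"; // ships with the page, never shown directly

    function renderItem(title: string, target: HTMLElement): void {
      // Clone a template out of the hidden div and fill in the dynamic bits.
      const template = templateStore.querySelector(".item-template") as HTMLElement;
      const node = template.cloneNode(true) as HTMLElement;
      node.querySelector(".title")!.textContent = title;
      target.appendChild(node);
    }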
To answer your URL question, I use attributes, like this:
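One way such data attributes might work (the attribute names here are hypothetical):

    // Hypothetical: links carry the view they point at in a data attribute,
    // and the JS turns that into a hash URL plus a JSON request.
    // <a href="#" data-view="listings/42">See listing</a>
    document.querySelectorAll<HTMLAnchorElement>("a[data-view]").forEach((link) => {
      link.addEventListener("click", (e) => {
        e.preventDefault();
        const view = link.dataset.view!;
        window.location.hash = view;          // linkable, bookmarkable URL
        console.log("fetch JSON for", view);  // then load the data for that view
      });
    });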
This is very interesting, but I'm wondering about the cost to users of pushing most of the computation down to their browser. Certainly, for most of us here, that's not an issue, but what about the person running an older desktop without much processing power and/or RAM?
Probably just a judgment call he'd have to make about his users, but your machine would have to be pretty old and slow to not be able to manipulate the few kilobytes of text that is a typical web page.