
Why I chose Clojure/CouchDB for a new site - neeson
http://blog.urbantastic.com/post/81336210/tech-tuesday-the-fiddly-bits
======
asmosoinio
Interesting idea, serving all HTML static and combining that on client side
with the dynamic content using JavaScript. And having a pure JSON server.
Sounds groundbreaking to me. Does anyone know of other sites/frameworks that
work like this?

~~~
timf
" _serving all HTML static and combining that on client side with the dynamic
content using JavaScript_ "

So baked into this is lack of support for clients without javascript.

I guess I understand if this is a conscious lack of support for the many
lynx/noscript/etc users out there (I'm not going to do that myself and so the
architecture is not an option).

But what about search engine crawlers, hasn't javascript + search always been
an issue?

~~~
Zak
I think something like this is a better fit for something more application-
like than for a web page with dynamic content. I suspect Google will
eventually start supporting Javascript as this sort of design becomes more
popular.

~~~
patio11
_I suspect Google will eventually start supporting Javascript as this sort of
design becomes more popular._

The future started more than two years ago, actually. You can experimentally
verify it for yourself -- my understanding is that they use a combination of
heuristics and actual evaluation.

For example, putting an invisible div on your web page stuffed with keywords
is usually a one way trip to smackdown city. Put an invisible div on your web
page and make it visible in response to pushing a button and they will index
the content much of the time. They are known to spend extra resources to make
sure popular techniques do not cause their algorithms to break. (From an SEO
perspective I'd suggest being one step behind the cutting edge on innovations
like that. 100% JSON site? Cutting edge, probably uncrawlable. Shopping cart
rendered using Prototype? Works fine.)

------
geuis
Great to see a production site using couchdb. Please keep a running account of
how it's working.

------
icey
So if the user has javascript disabled, nothing on the site works?

~~~
patio11
The cost of supporting clients with disabled Javascript increases constantly.
As Javascript frameworks get increasingly feature-rich, regular users expect
more out of their Internet experience, and development techniques mature, it
gets easier and more mandatory to do more in Javascript and harder and more
expensive to fake the presence of comparable functionality using HTML, forms,
etc.

The revenue generated by supporting clients with disabled Javascript is not
increasing at nearly the rate support costs are.

I know many technically apt people get up in arms over this, but there comes a
point where going into your browser settings (which 99%+ of users will never
do), scrolling down to the section marked I Hope You Know What You're Doing,
and unchecking boxes means you are affirmatively opting for a second-class
experience.

I know the rejoinder: "Blind people can't use your site, you heartless
bastard!" It is highly likely that my site and software will be suboptimal to
them. It is also highly likely that my site and software will be suboptimal to
people who, through no fault of their own, are illiterate. Both of these are
tractable issues if someone wants to throw sums of money which are many
multiples of my budget at fixing them.

I have yet to hear a good reason for why that someone must be me.

[Edit to clarify: this is not specifically related to the site I have in my
profile, but it could be very easily.]

~~~
liminalist
As long as you're fine with web crawlers not seeing your content, people not
building mashups based on your HTML, and so forth, be as much of a heartless
bastard as you want. But do keep in mind that Googlebot is the biggest
disabled user in the world. If blind people can't see it, search engines can't
see it. And if search engines can't see your website, who cares about you?

~~~
gnaritas
If I want people building mashups with my site, I'll provide an API, HTML is
not an API.

If I want Google to see my data, I'll provide it to them when they crawl.
More to the point, if I'm ajaxing in data with JavaScript, it's usually
explicitly because I don't want crawlers getting to it.

It is not a web developer's job to go out of his way to support people who
deliberately break their browsers and who, more often than not, contribute
very little to the bottom line. Most of us are building apps for people who
actually _want_ to use them as intended.

------
herval
"crowdsourcing and micro-volunteering"? wow.

------
smanek
_All the HTML in Urbantastic is completely static. All dynamic data is sent
via AJAX in JSON format and then combined with the HTML using Javascript. Put
another way, the server software for Urbantastic produces and consumes JSON
exclusively. HTML, CSS, Javascript, and images are all sent via a different
service (a vanilla Nginx server)._

Wait ... so if he wants to populate the static HTML with information from a
database, the client side javascript has to access the database directly? And
his database is internet accessible/viewable? That seems bad ...

~~~
Xichekolas
No, presumably the client clicks on something, which calls an action on his
server via xhr, which does some server-side logic (say, update the cart and
compute a new total) and returns a json packet, which the client then uses to
update the page.

Without knowing the specific details, I'd imagine the json response has
directives for what static html to load if needed, which results in more xhr
requests to get those files. The client side js simply needs to know how to
process the json it's given, it doesn't need to know any business/persistence
info.
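A minimal sketch of what that flow might look like. The endpoint, field
names, and element ids here are all invented for illustration; the actual
site's wire format isn't described in the post.

```javascript
// Pure helper: turn a (hypothetical) cart-update JSON response
// into display strings for the page.
function formatCartUpdate(json) {
  return {
    count: `${json.items.length} item(s)`,
    total: `$${json.total.toFixed(2)}`,
  };
}

// Browser-side wiring (in modern terms, using fetch rather than a
// raw XMLHttpRequest): post the action, get JSON back, update the DOM.
async function addToCart(productId) {
  const res = await fetch('/cart/add', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ id: productId }),
  });
  const json = await res.json(); // server speaks JSON only
  const view = formatCartUpdate(json);
  document.querySelector('#cart-count').textContent = view.count;
  document.querySelector('#cart-total').textContent = view.total;
}
```

The point is the separation: the server never renders HTML, and the client
js only needs to know how to map JSON fields onto the static page.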

What I'm curious about is how he handles urls (if everything is xhr, then the
url will always stay the same, which is kind of a pain for linking to specific
stuff, unless you do anchor workarounds like Facebook does). Also, I'd be
curious if he uses the static html files as templates (injecting data into
them clientside) or just has a TON of tiny html fragments.

~~~
neeson
So far, all the HTML that a given page will need is part of the same document.
The templates for the dynamic content are all stored inside a hidden div.
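One way the hidden-div template approach might look in practice. The
placeholder convention and all ids below are invented, not taken from the
actual site.

```javascript
// Pure helper: fill {{name}}-style placeholders in a template string.
function fillTemplate(tpl, data) {
  return tpl.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : '');
}

// In the browser, the template markup would ship inside the static page:
//   <div id="templates" style="display:none">
//     <div id="org-tpl"><h2>{{name}}</h2><p>{{city}}</p></div>
//   </div>
// and get filled and injected once the JSON arrives:
function renderOrg(json) {
  const tpl = document.querySelector('#org-tpl').innerHTML;
  document.querySelector('#main').innerHTML = fillTemplate(tpl, json);
}
```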

I expect that eventually this will cause too much of an up-front load time, so
I'm planning on having the JS load bundles of it on demand. Reducing total
HTTP requests is a big usability win, in my experience.

To answer your URL question, I use attributes, like this:

<http://urbantastic.com/org.html?id=org-8srmt85mtf8t>

Which the server ignores, but the Javascript parses and uses to figure out
where it is.
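A sketch of that client-side routing step, assuming the id simply rides in
the query string as shown above (the server serves org.html statically and
ignores the query; the js reads it and fetches the matching JSON):

```javascript
// Extract the id parameter from a query string like
// "?id=org-8srmt85mtf8t" (i.e. location.search in the browser).
function idFromQuery(search) {
  const m = /[?&]id=([^&]+)/.exec(search);
  return m ? decodeURIComponent(m[1]) : null;
}

// In the browser, page startup would look something like:
//   const id = idFromQuery(location.search);
//   if (id) fetch(`/api/org/${id}`).then(r => r.json()).then(render);
// where the /api/org path and render() are hypothetical names.
```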

~~~
Xichekolas
Ah, yeah it'd make much more sense to have all the html handy at once instead
of requesting fragments one at a time.

So in the case of that link... org.html has all the html fragments it needs
for anything that can be done on that page in a hidden div?

A neat idea for sure, thanks for sharing... (now off I go to play with it!)

------
bmj
This is very interesting, but I'm wondering what the cost to users is by
pushing most of the computation down to their browser? Certainly, for most of
us here, that's not an issue, but what about the person running an older
desktop without much processing power and/or RAM?

~~~
Xichekolas
Probably just a judgment call he'd have to make about his users, but your
machine would have to be pretty old and slow to not be able to manipulate the
few kilobytes of text that is a typical web page.

