Hacker News
Why I chose Clojure/CouchDB for a new site (urbantastic.com)
81 points by neeson 3042 days ago | 33 comments

Interesting idea, serving all HTML as static files and combining it on the client side with the dynamic content using JavaScript. And having a pure JSON server. Sounds groundbreaking to me. Anyone know of other sites/frameworks that work like this?

"serving all HTML static and combining that on client side with the dynamic content using JavaScript"

So baked into this is lack of support for clients without javascript.

I guess I understand if this is a conscious decision not to support the many lynx/noscript/etc. users out there (I'm not willing to do that myself, so the architecture is not an option for me).

But what about search engine crawlers? Hasn't javascript + search always been an issue?

I think something like this is a better fit for something more application-like and less web page with dynamic content. I suspect Google will eventually start supporting Javascript as this sort of design becomes more popular.

I suspect Google will eventually start supporting Javascript as this sort of design becomes more popular.

The future started more than two years ago, actually. You can experimentally verify it for yourself -- my understanding is that they use a combination of heuristics and actual evaluation.

For example, putting an invisible div on your web page stuffed with keywords is usually a one-way trip to smackdown city. Put an invisible div on your web page and make it visible in response to pushing a button, and they will index the content much of the time. They are known to spend extra resources to make sure popular techniques do not cause their algorithms to break. (From an SEO perspective I'd suggest being one step behind the cutting edge on innovations like that. 100% JSON site? Cutting edge, probably uncrawlable. Shopping cart rendered using Prototype? Works fine.)

Hoodwink.d used this idea


And there was another framework that did essentially the same thing, using DHTML and AJAX to create the page from a blank slate. But I cannot recall the name.

Apparently it is a trend with FP guys :-)

HAppS (a Haskell framework) has also advocated this:

"HAppS does not come with a server-side templating system. We prefer the pattern of developing static web pages and using AJAX to populate them with dynamic content."


No, but I've had a suspicion for a while that something like Erlang could actually be useful for doing web stuff if, instead of doing much in the way of templates (Erlang is not fun for strings, IMO), it concentrated on sending and receiving JSON.

Great to see a production site using CouchDB. Please keep a running account of how it's working.

So if the user has javascript disabled, nothing on the site works?

The cost of supporting clients with Javascript disabled increases constantly. As Javascript frameworks get increasingly feature-rich, regular users expect more out of their Internet experience, and development techniques mature, it gets easier (and more necessary) to do more in Javascript, and harder and more expensive to fake the presence of comparable functionality using plain HTML, forms, etc.

The revenue generated by supporting clients with disabled Javascript is not increasing at nearly the rate support costs are.

I know many technically apt people get up in arms over this, but there comes a point where going into your browser settings (which 99%+ of users will never do), scrolling down to the section marked I Hope You Know What You're Doing, and unchecking boxes means you are affirmatively opting for a second-class experience.

I know the rejoinder: "Blind people can't use your site, you heartless bastard!" It is highly likely that my site and software will be suboptimal to them. It is also highly likely that my site and software will be suboptimal to people who, through no fault of their own, are illiterate. Both of these are tractable issues if someone wants to throw sums of money which are many multiples of my budget to fixing them.

I have yet to hear a good reason for why that someone must be me.

[Edit to clarify: this is not specifically related to the site I have in my profile, but it could be very easily.]

As long as you're fine with web crawlers not seeing your content, people not building mashups based on your HTML, and so forth, be as much of a heartless bastard as you want. But do keep in mind that Googlebot is the biggest disabled user in the world. If blind people can't see it, search engines can't see it. And if search engines can't see your website, who cares about you?

If I want people building mashups with my site, I'll provide an API; HTML is not an API.

If I want Google to see my data, I'll provide it to them when they crawl. More accurately, if I'm ajaxing in data with Javascript, it's usually explicitly because I don't want crawlers getting to it.

It is not a web developer's job to go out of his way to support people who deliberately break their browsers and, more often than not, contribute very little to the bottom line. Most of us are building apps for people who actually want to use them as intended.

There is a world of difference between blind people using your site versus search engines reading your site.

people not building mashups based on your HTML

It's not a bug, it's a feature!

Hypothetically assuming I had a product targeted at people who were technically capable of developing mashups (or, for that matter, had ever heard the word), I would want to have them use a published API rather than my HTML, because I routinely need to change my HTML. I do not want to have to give everyone 6 weeks of notice every time I do a split test to avoid breaking my core users' sites.

Maybe the question should be "What happens if javascript is disabled?".

I do think that it's important today because there are still web browsers, especially in the mobile space, where people will not have javascript or more specifically AJAX available.

I think it's especially pertinent when talking about social anything kind of sites, where people are going to be likely to try to access it from a mobile device.

I don't see it as an issue of responsibility as much as I see it as a customer service issue.

"it gets easier and more mandatory to do more in Javascript and harder and more expensive to fake the presence of comparable functionality using HTML, forms, etc."

You don't have to "fake the presence of comparable functionality" (and in most cases you simply can't).

What you can and should do is support the subset of functionality that raw HTML and CSS sans Javascript can achieve. No one will blame you for not supporting highly dynamic features that can't possibly be achieved without Javascript.

> What you can and should do is support the...

Bullshit; the only thing we should do is whatever the bottom line requires. 99.9% of users don't break their browsers on purpose, so you lose very little and save a ton of time and effort by simply not going the extra mile for those pesky few that try to break your site.

I wonder about the assertion that this approach would prevent blind people from using a site. I'm not up to speed on the current state of the art of screenreaders for the blind (last time I looked they were still running on DOS) but it would surprise me if they weren't looking at the DOM that resulted from loading the page in a regular browser, rather than reading the HTML directly. If that were the case, all of the javascript stuff should "just work". Anyone know better?

Author here. Here's the relevant line in the post:

"[..]web browsers are not the only clients that will use Urbantastic. Mobile devices, search engine spiders, screen readers for people with disabilities, and RSS readers all need the same data but in different forms. Accommodating any of these is simply a matter of dropping a different rendering front-end in front of the common JSON data server."

It's not up yet, but I'm working on a front end intended for non-javascript users. It will also serve blackberry users, IE6 users, and spiders.

It will be a /much/ simpler site, but you'll be able to get everything done on it. I figured it's easier to separate it out than try to shoehorn every use into one format.

The general principle is that I'm going to design for the large majority of the users, and use every Javascript capability I can to make it an excellent experience. Then I'll create a simplified mirror for the minority uses. Gmail took this route and I think it's worked well for them.

So you've taken the idea of progressive enhancement and reversed it? Zeldman would be rolling in his grave... if he were dead, anyway. I've always found it easier to go the other way: make a site that works on everything, then just add bells, whistles, and enhancements with Javascript as you feel compelled to do so. Doing it this way, I find places I was going to use pure JS where there was no real value-add to it.

Bizarre. Everything from "The Language" down wasn't there when I read this earlier. I guess I must have been having connection problems.

Anyways, thanks for following up - this is a great way to handle clients without Javascript.

IMO people who disable javascript might as well just use lynx or something. Javascript is the current and future web and nothing is going to change that, so get over it.

You're not familiar with TimBL's Principle of Least Power, I see. The majority of the web pages I visit work fine without JavaScript (I use the NoScript extension) and they look a lot better in Firefox than in lynx, or even links.

Any Blackberry released before the Bold does not come with AJAX support.

I'm going to go out on a limb and say that maybe they should think about how the site will work without Javascript.

"crowdsourcing and micro-volunteering"? wow.

All the HTML in Urbantastic is completely static. All dynamic data is sent via AJAX in JSON format and then combined with the HTML using Javascript. Put another way, the server software for Urbantastic produces and consumes JSON exclusively. HTML, CSS, Javascript, and images are all sent via a different service (a vanilla Nginx server).
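The split the post describes (dynamic data from a JSON-only service, everything else from vanilla Nginx) might look roughly like the config below. This is a sketch, not the actual Urbantastic config; the document root, upstream port, and /api prefix are all assumptions:

```nginx
server {
    listen 80;

    # HTML, CSS, Javascript, and images: plain static files from Nginx
    location / {
        root /var/www/urbantastic/static;
    }

    # Everything dynamic is proxied to the JSON server
    # (the port and path prefix here are hypothetical)
    location /api/ {
        proxy_pass http://127.0.0.1:8000;
    }
}
```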

Wait ... so if he wants to populate the static HTML with information from a database, the client side javascript has to access the database directly? And his database is internet accessible/viewable? That seems bad ...

No, presumably the client clicks on something, which calls an action on his server via xhr, which does some server-side logic (say, update the cart and compute a new total) and returns a json packet, which the client then uses to update the page.

Without knowing the specific details, I'd imagine the json response has directives for what static html to load if needed, which results in more xhr requests to get those files. The client side js simply needs to know how to process the json it's given, it doesn't need to know any business/persistence info.

What I'm curious about is how he handles urls (if everything is xhr, then the url will always stay the same, which is kind of a pain for linking to specific stuff, unless you do anchor workarounds like Facebook does). Also, I'd be curious if he uses the static html files as templates (injecting data into them clientside) or just has a TON of tiny html fragments.

So far, all the HTML that a given page will need is part of the same document. The templates for the dynamic content are all stored inside a hidden div.

I expect that eventually this will cause too much of an up-front load time, so I'm planning on having the JS load bundles of it on demand. Reducing total HTTP requests is a big usability win, in my experience.
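A minimal sketch of the template-filling idea, assuming simple placeholder-style snippets (the actual markup and placeholder syntax used on Urbantastic aren't shown in this thread):

```javascript
// Fill a template string's {field} placeholders from a JSON object.
// Unknown placeholders are left untouched.
function fillTemplate(template, data) {
  return template.replace(/\{(\w+)\}/g, function (match, key) {
    return data.hasOwnProperty(key) ? String(data[key]) : match;
  });
}

// The template would be read out of the hidden div on the page;
// here it's inlined for the sake of the example.
var template = '<li class="event">{title} at {time}</li>';
fillTemplate(template, { title: "Park cleanup", time: "10am" });
// '<li class="event">Park cleanup at 10am</li>'
```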

To answer your URL question, I use attributes, like this:


Which the server ignores, but the Javascript parses and uses to figure out where it is.
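The example itself didn't survive in this copy of the thread, so here is a hypothetical illustration of the general technique rather than the author's actual markup. One common variant encodes state in the URL fragment, which the server likewise ignores but client-side Javascript can parse:

```javascript
// Parse a location hash like "#org/123" into routing state.
// The "page/id" scheme here is invented for illustration.
function parseHash(hash) {
  var parts = hash.replace(/^#/, "").split("/");
  return { page: parts[0] || "home", id: parts[1] || null };
}

parseHash("#org/123"); // { page: "org", id: "123" }
parseHash("");         // { page: "home", id: null }
```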

Ah, yeah it'd make much more sense to have all the html handy at once instead of requesting them one at a time.

So in the case of that link... org.html has all the html fragments it needs for anything that can be done on that page in a hidden div?

A neat idea for sure, thanks for sharing... (now off I go to play with it!)

Makes much more sense. Thanks

There's a layer (written in Clojure) between the client and CouchDB which does authentication, permissions, etc.

This is very interesting, but I'm wondering what the cost to users is of pushing most of the computation down to their browser. Certainly, for most of us here, that's not an issue, but what about the person running an older desktop without much processing power and/or RAM?

Probably just a judgment call he'd have to make about his users, but your machine would have to be pretty old and slow to not be able to manipulate the few kilobytes of text that is a typical web page.
