
JavaScript makes relative times compatible with caching - sant0sk1
http://www.37signals.com/svn/posts/1557-javascript-makes-relative-times-compatible-with-caching
======
axod
Seen this used a few times before, but not usually with ugly custom attributes
and innerHTML :/

Those 2 things really have no place anywhere IMHO.

~~~
bprater
Agreed, I'm surprised they decided to use a non-standard attribute. But it is
a simple and elegant technique if you want to save the server from doing extra
work it doesn't need to.
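
The technique under discussion can be sketched roughly like this (my own construction, under assumptions: the attribute name "data-time" and the helper `timeAgo` are made up for illustration, not taken from the 37signals post). The server emits an absolute timestamp, so the page stays cacheable, and the client rewrites it as a relative time:

    // Pure helper: turn an absolute timestamp into a relative "ago" string.
    function timeAgo(thenMs, nowMs) {
      var seconds = Math.floor((nowMs - thenMs) / 1000);
      if (seconds < 60) return seconds + " seconds ago";
      if (seconds < 3600) return Math.floor(seconds / 60) + " minutes ago";
      if (seconds < 86400) return Math.floor(seconds / 3600) + " hours ago";
      return Math.floor(seconds / 86400) + " days ago";
    }

    // Browser-only wiring; skipped when there is no DOM.
    if (typeof document !== "undefined") {
      window.onload = function () {
        var spans = document.getElementsByTagName("span");
        for (var i = 0; i < spans.length; i++) {
          var stamp = spans[i].getAttribute("data-time");
          if (stamp) {
            // textContent rather than innerHTML: the value is plain text.
            spans[i].textContent = timeAgo(Date.parse(stamp), Date.now());
          }
        }
      };
    }

Since the HTML never changes between requests, the same cached page shows "2 minutes ago" or "2 hours ago" depending on when it's viewed.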

~~~
axod
Sure, but it's just as easy for the server to output a javascript structure
with the data in it. E.g.:

    
    
      <script>
      var data = {time1: "Nov 5, 1955", time2: "Jan 1, 1970"};

      function init() {
        // Update the spans with the "ago"s
      }
      </script>

      <span id="time1"></span>

Using the DOM to store program data just doesn't seem nice to me.

~~~
lhorie
>> Using the DOM to store program data just doesn't seem nice to me.

I disagree. I think the DOM is where data should be (look at meta tags, for
example). Storing data in JSON-style structures or XML-like attributes sounds
a bit like repurposing things.

Also, according to Steve Souders, sprinkling script tags everywhere (which I
suppose is the easiest way to get the JSON approach working on a site of
arbitrary complexity) is not a good idea in terms of loading speed.

~~~
axod
What if your data contains HTML entities, newlines, unicode, etc.? HTML is a
hornet's nest, whilst JS is pretty sane. I would never advocate sprinkling
script tags everywhere :/ One is usually enough. (In this case I would
obviously have one script tag at the end of the body, with the data structure
and the function to initialize the span tags.)

~~~
lhorie
As far as encoding goes, my experience has actually been worse on the
javascript side, specifically with french text and regular expressions buried
in 3rd party libraries. Compared to that, grabbing data from the DOM using DOM
text node methods is a walk in the park.
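
A representative guess at the class of bug I mean (the exact library bug isn't specified here): in a JavaScript regular expression, `\w` matches only ASCII word characters, so a library that tokenizes text with `\w+` silently truncates accented French words.

    // \w is [A-Za-z0-9_] only, so accented characters end a "word".
    function firstWord(s) {
      var m = s.match(/\w+/);
      return m ? m[0] : null;
    }

    // firstWord("café au lait") yields "caf" — the "é" is dropped, and the
    // breakage only shows up once non-ASCII input reaches the code.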

The thing with the json solution is that once we have things being ajaxed in,
it becomes very hard to maintain the code. Has id="date7" been used? What if I
add a comment while the "shoutbox" feature is updating?

Having the data in HTML would let us use something like jQuery's livequery,
and it would work regardless of how, and how much, data got into the page
(plus we get the benefit of the no-javascript scenario I mentioned before).

~~~
axod
json? ajax?

The solution I proposed was simple inline javascript at the end of the HTML.
The JS data is generated at the same time as the HTML output. The IDs all
match, and each is used once.

I've never seen any issues with encoding in js, so I'm not sure what you mean
about french text, regexps...

~~~
lhorie
What I'm saying is that once a site becomes more complicated (e.g. if the data
comes after page load, in an ajaxed popup, for example), that simple solution
becomes harder to maintain (or to implement without wasting bandwidth). To be
fair, if you don't work on complex web apps, you'll probably never run into
this.

>> I've never seen any issues with encoding in js, so I'm not sure what you
mean about french text, regexps...

Good for you. I hope you never see these types of bugs; they are nasty to
troubleshoot.

------
andreyf
How far can we generalize this? If an application has millions of users, can
pushing just a little of your server-side load into their browsers end up
saving real money in server/scaling costs?

More evilly, how much computation/storage capacity could Google get by
pushing work off to GMail users' machines? Of course, a single computer isn't
reliable, but with millions of users it's a simple matter of
failure/redundancy probability. A distributed file system in browsers' Gears
databases would work just like GFS, but with higher failure rates.

~~~
ionfish
To address your first point, GitHub are already doing some of this: they
reduced server loads by lazy-loading commit data when cached data isn't
available.

<http://github.com/blog/293-while-you-were-sleeping>

At work we do a fair amount of client-side form validation. While we have
server-side validation too, doing it in the browser as well vastly reduces the
number of times invalid inputs make it through to the server and have to be
expensively processed. Of course, this creates other problems (making sure the
client- and server-side validation routines precisely mirror one another, for
example), but for some use-cases it's worth it.
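
One way to keep the two validation routines mirrored (my own sketch, not necessarily what ionfish's team does) is to express the rules once as data and run the same checker in both environments; the field names and patterns below are purely illustrative:

    // Shared rule table: the same object can be served to the browser and
    // loaded by the server-side process, so the two can't drift apart.
    var rules = {
      email: /^[^@\s]+@[^@\s]+\.[^@\s]+$/,
      zip: /^\d{5}$/
    };

    // Returns the list of fields that fail their rule; empty means valid.
    function validate(form) {
      var errors = [];
      for (var field in rules) {
        if (!rules[field].test(String(form[field] || ""))) errors.push(field);
      }
      return errors;
    }

The client calls `validate` before submitting to reject bad input cheaply; the server calls the same function on the posted data as the authoritative check.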

