That is, new and low-traffic sites are crawled by less intelligent bots, and as a site gets more visitors or better rankings, more sophisticated and resource-intensive bots are deployed.
How this might work with the most popular sites out there - the Amazons and Wikipedias of this world - I'm not so sure about that. If I were in charge, I'd be tempted to have customised bots and ranking weights for each of these exceptional sites.
Sadly the chances of getting a real answer on this in my lifetime are close to zero.
I'd expect that there are also other heuristics and different crawling strategies to better handle, e.g., content presented by one of the popular CMSes.
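To make the idea concrete, here's a minimal sketch of what such a tiered crawl policy could look like. Everything in it is an assumption for illustration: the signal names, the thresholds, and the crawler tier labels are invented, not anything Google has documented.

```python
# Hypothetical sketch of a tiered crawl policy: cheap HTML-only fetches for
# new/low-traffic sites, a headless (JS-rendering) crawler once a site shows
# enough traffic or ranking signal, and CMS-aware handling where the markup
# is already well understood. All thresholds and names are made up.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SiteSignals:
    monthly_visits: int        # estimated traffic (assumed signal)
    ranking_score: float       # 0.0 - 1.0 aggregate ranking strength (assumed)
    known_cms: Optional[str]   # e.g. "wordpress", "drupal", or None

def pick_crawler(signals: SiteSignals) -> str:
    """Return which crawler tier to send to this site."""
    # A crawler that already understands a popular CMS's markup can skip
    # expensive rendering entirely.
    if signals.known_cms in {"wordpress", "drupal"}:
        return f"cms-aware-fetcher:{signals.known_cms}"
    # High traffic or strong rankings justify the resource-hungry bot.
    if signals.monthly_visits > 100_000 or signals.ranking_score > 0.7:
        return "headless-renderer"    # executes JS, builds the full DOM
    # Everything else gets the cheap, dumb HTML fetcher.
    return "plain-http-fetcher"

print(pick_crawler(SiteSignals(monthly_visits=500, ranking_score=0.1, known_cms=None)))
# -> plain-http-fetcher
```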
heuristics to see whether executing JS would be worth it, would yield additional
If you look at Google's cached version, you can see that the JS is executed (although it fails trying to download the actual data): https://webcache.googleusercontent.com/search?q=cache:hN5yCk...
Edit: as has been pointed out below, the cached version is just the same as the original and the JS gets executed on your end. This doesn't show whether Google also executes it during its crawl.
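For what it's worth, the "is JS worth executing?" heuristic quoted above could be approximated by rendering a page once and checking whether the rendered DOM carries meaningfully more text than the raw HTML. The sketch below assumes you already have both HTML strings (the rendering step and the 20% threshold are my own assumptions, not anything Google has confirmed).

```python
# Rough sketch: decide whether JS execution yields additional content by
# comparing visible text in the raw HTML against the rendered DOM.
# Uses only the standard library; how the rendered HTML is obtained
# (headless browser, etc.) is left out and assumed.

from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, ignoring script and style bodies."""
    def __init__(self):
        super().__init__()
        self._skip = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def visible_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

def js_worth_executing(raw_html: str, rendered_html: str, threshold: float = 0.2) -> bool:
    """True if rendering added substantially more visible text than the raw HTML had."""
    raw_len = len(visible_text(raw_html))
    rendered_len = len(visible_text(rendered_html))
    if raw_len == 0:
        return rendered_len > 0   # empty shell page: JS is doing all the work
    return (rendered_len - raw_len) / raw_len > threshold
```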
Correct me if I am wrong but when I look at the cached version of the homepage (http://webcache.googleusercontent.com/search?q=cache:hN5yCky...), I don't see that the JS has been interpreted.
Second, thank you for responding to someone pointing out that you were wrong without putting yourself on the defensive.
I see this entire sub-thread as a positive; glad we all could learn along with you!
How this might work with the most popular sites out there?
We see it in on-page answers: extracts of pages that answer the question asked in the search phrase, shown together with a reference to the document they were sourced from.
Matt Cutts used to describe sites like Wikipedia as "reputable" in the eyes of the search engine.