Hacker News
Saving server costs with Javascript (markmaunder.com)
2 points by mmaunder on July 18, 2007 | 2 comments



It's less of a benefit than you might think, because most websites are not CPU-bound. The bottleneck tends to be disk seeks, particularly if you use a database. You can't move I/O to your visitors' machines because you don't have access to their disks.
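To make that concrete, here's a minimal sketch (TypeScript, browser-side) of the kind of offloading the article has in mind: the CPU work (sorting, totalling) runs in the visitor's browser. The /api/orders endpoint and the Order shape are made up for illustration. Note that the server still performs the same database read either way, so if the box is I/O-bound rather than CPU-bound, you save very little:

    // Hypothetical endpoint and record shape, purely for illustration.
    interface Order { id: number; amount: number; placedAt: string; }

    async function renderOrderTotal(): Promise<void> {
      // The server still pays for the disk seeks behind this query;
      // only the post-processing moves to the visitor's machine.
      const resp = await fetch("/api/orders?raw=1");
      const orders: Order[] = await resp.json();

      // CPU work shifted to the client: sorting and totalling.
      orders.sort((a, b) => b.amount - a.amount);
      const total = orders.reduce((sum, o) => sum + o.amount, 0);

      const el = document.getElementById("order-total");
      if (el) el.textContent = total.toFixed(2);
    }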

Perhaps it'd be more useful for sites like Mailinator or news.YC that keep everything in memory, but sites like that usually don't have performance problems anyway. And bandwidth typically becomes a bottleneck if you try to use a distributed memory cache over consumer Internet connections.

It'd be really useful for websites doing heavy computation, but how many websites like that do you know? I work in the financial sector, and even there this isn't feasible: the data needed totals about 50 GB/day, and there's no way in hell you can push that over a dialup modem (a 56k line tops out at well under 1 GB/day even running around the clock).


Sure, massive distributed processing is a stretch. But as a general guideline for app architecture, this is IMHO a pretty good one. :)



