
One of my blog articles got on the front page of HN a while ago: https://news.ycombinator.com/item?id=29185971

In total, I saw roughly 27'000 views over the course of that day (load varied, and I was too lazy to figure out the peak RPS), which meant just short of 8 GB of data transferred and over 500'000 files requested (all of the CSS, images, JavaScript etc.).

Now, the blog held up fine, because it's built on Grav, which means it ends up being a bunch of flat files: https://getgrav.org/

It's especially interesting when you consider that I had capped the container it runs in to 512M of RAM and 0.75 CPU cores, just so it wouldn't slow down the entire node (I can't really afford a separate server for it).
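
For reference, that kind of cap is just a couple of flags if you run the container with plain Docker (a sketch; the image name is a placeholder, the limits mirror the ones above):

    # --memory caps RAM, --cpus caps CPU time; "my-grav-blog" is a placeholder image name
    docker run -d --memory=512m --cpus=0.75 -p 8080:80 my-grav-blog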

So in essence, I think that static files can be served really well with limited resources, but once you throw complicated PHP apps (think WordPress), insufficient caching, database access (especially with sub-optimally written queries) and perhaps even something like mod_php instead of PHP-FPM into the mix, things can indeed go wrong.

I've seen enterprise projects struggle at 100 requests per minute due to exceedingly poorly written data fetching and N+1 query problems, with developers either not knowing how to avoid issues like that or outright not caring, because the system was an internal one and the infrastructure had resources to waste.
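
If anyone hasn't run into the N+1 pattern before, here's a minimal sketch in Python with sqlite3 (the table names are made up, but the shape of the problem is the same with any ORM): the first version issues one extra query per row, the second gets everything in a single JOIN.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    """)

    # N+1: one query for the orders, then one more query per order.
    orders = conn.execute("SELECT id, customer_id FROM orders").fetchall()
    for order_id, customer_id in orders:
        name = conn.execute(
            "SELECT name FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()  # a separate round trip for every single row

    # Same data in one query, no matter how many rows there are.
    rows = conn.execute(
        "SELECT o.id, c.name FROM orders o JOIN customers c ON c.id = o.customer_id"
    ).fetchall()

It looks harmless locally, but in production every one of those per-row queries is a round trip to the database.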

In my case, even the cheap 100 Mbps link was enough to serve all of the requests with minimal impact (I think the longest page load was around 4 seconds at peak load; most others were under 2 seconds).
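
For a rough sanity check on the bandwidth side (back-of-envelope, assuming the transfer was spread across the whole day):

    8 GB / 86'400 s     ≈ 0.74 Mbps average
    8 GB / 27'000 views ≈ 300 KB per view

So even if the peaks were an order of magnitude above the average, a 100 Mbps link still has plenty of headroom.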



