A $5 DO droplet will easily handle peak HN traffic with a bit of optimization. Set cache headers, cache static assets on the server side (or even better, with an external CDN), or even cache the entire page if there's no dynamic content.
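The cache-header part is only a few lines of nginx config. A hedged sketch (extension list and lifetimes are illustrative, not tuned recommendations):

```nginx
# Illustrative: long-lived cache headers for static assets.
# "expires 30d" emits both an Expires header and Cache-Control: max-age.
location ~* \.(css|js|png|jpe?g|gif|svg|woff2?)$ {
    expires 30d;
    access_log off;   # optional: skip logging for asset requests
}
```

With an external CDN in front, those headers also tell the CDN how long it may keep serving the asset without revisiting the origin.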
Without optimization, nginx defaults to a single worker process with 512 connections and a 60s timeout. 20 unique visitors per second will lead to 500 errors in less than a minute.
It does not matter how fast the backend is. A persistent HTTP connection will stay open for 60 seconds after the last client request, unless the browser goes out of its way to close it explicitly.
P.S. OP's website uses Apache, but the same issue of overly conservative default limits still applies.
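If those defaults are the bottleneck, raising them is a few lines in nginx.conf. The values below are a sketch, not tuned recommendations:

```nginx
# Less conservative limits; tune for your hardware and traffic.
worker_processes auto;          # one worker per CPU core instead of one total

events {
    worker_connections 4096;    # up from the conservative default
}

http {
    keepalive_timeout 15s;      # don't hold idle connections for a full minute
}
```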
There's no way the connection just sits idle and the worker can't serve other requests for the full timeout, right? That just sounds... wrong. And it's not consistent with load testing I've done before with an nginx setup.
There are multiple strategies for caching on the server; without them, IIRC, the PHP code is interpreted on each request, files are re-parsed, and the database is hit every time.
There's fastcgi caching in nginx, PHP opcode caching in php-fpm, and WP-specific cache plugins like WP Super Cache. At least this was the case ~10 years ago.
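A rough sketch of the nginx fastcgi caching mentioned above (the zone name, cache path, and socket path are placeholders; the fastcgi_cache_path and fastcgi_cache_key lines belong in the http block):

```nginx
# Illustrative fastcgi caching in front of a PHP-FPM backend.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WPCACHE:10m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;

        fastcgi_cache WPCACHE;
        fastcgi_cache_valid 200 301 302 10m;   # cache successful responses
        # Skip the cache for logged-in WordPress users.
        fastcgi_cache_bypass $cookie_wordpress_logged_in;
        fastcgi_no_cache $cookie_wordpress_logged_in;
    }
}
```

With this in place, most anonymous page views never reach PHP at all once the cache is warm.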
Also for $5/mo you can use Cloudflare APO to cache WordPress pages at the edge. Yes it will cache even the "dynamic" pages (unless you're logged in of course)
That's fine, I was just listing another option. It should be noted you should still do server side caching. This just lets you serve from Cloudflare's caching layers too
Are you sure? My understanding was that nginx would fill up all free connections up to the max, but then would begin draining idle connections so it could create new ones for new visitors.
I run a few big sites on WordPress and would totally recommend the Simply Static plugin, with nginx configured to serve those cached files directly so it never hits PHP. I also usually put Cloudflare in front as well, but with that trick alone you can have a screaming-fast WordPress site that scales really far by itself.
If you need some dynamism, W3 Total Cache is also a great choice.
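The nginx side of that trick can be as simple as a try_files that prefers the generated static file and only falls back to PHP when no static copy exists. Paths here are placeholders for wherever Simply Static exports and where WordPress lives:

```nginx
# Serve pre-generated static files directly; hit PHP only as a fallback.
root /var/www/static;   # placeholder: Simply Static output directory

location / {
    try_files $uri $uri/index.html @wordpress;
}

location @wordpress {
    # Hand off to WordPress only when no static copy exists.
    fastcgi_pass unix:/run/php/php-fpm.sock;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/wordpress/index.php;
}
```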
Check out this thread for recommendations on static site generator plugins on top of WordPress. Best of both worlds for folks who want high-level publishing.
Looks like the free one has only 1GB outbound transfer, which wouldn't be enough for a single hug, but the $5/month one has 40GB, which might be enough for a hug with text-only content.
They were grateful the thing got so much attention, and deferred to other hosts to supplement serving that attention. Not looking for uptime solutions.
(Now that I’ve tried to RTFA, it seems homie has a runtime dependency (db?) or something colocated on the host failing? OWASP, right? Having a resource limit reached, maybe? Next.js seems like a cool approach to manage partially ~runtime-dynamic systems like this/cms/etc)
I was thinking about this same problem for a couple of days. Another question I have is: do we have a higher number of top-level comments on a given number of months or not?
It's a static site; there's no reason the server shouldn't handle thousands of connections at once with almost no configuration changes with something like Nginx or Apache. Or even a domain that points directly to an S3 bucket. Hope you're not looking at DevOps roles in Who is Hiring posts.