Hacker News

Also, people underestimate the power of serving out of RAM. It's not unreasonable to serve 20-30K QPS off a single server if the work it needs to do is limited to minimal request parsing and fetching some data from main memory. That's about 2.5 billion requests/day, fully loaded. Granted, I'm thinking something more like memcached than a fully-formed webserver, but an in-memory webserver that stores its data in hashtables (like news.yc) and has a really fast templating language, or just writes output directly to the socket, could probably come close.
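A minimal sketch of that shape of server, assuming requests reduce to parsing a request line, one hashtable lookup, and writing prebuilt bytes straight back to the socket (the paths and page contents here are made up; a plain dict stands in for the in-memory store):

```python
# Responses are prebuilt byte strings keyed by path, so serving a request
# is just: parse the request line, look up the dict, write the bytes.
PAGES = {
    "/": b"<html><body>front page</body></html>",
    "/item?id=1": b"<html><body>item 1</body></html>",
}

def handle(request_line):
    # Minimal request parsing: "GET /path HTTP/1.1"
    parts = request_line.split(" ")
    path = parts[1] if len(parts) >= 2 else "/"
    body = PAGES.get(path)
    if body is None:
        return b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n"
    header = ("HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body)).encode()
    return header + body
```

At 30K QPS, 30,000 × 86,400 seconds is about 2.6 billion requests/day, which is where the figure above comes from; the per-request work here is small enough that the socket handling, not the lookup, dominates.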



I use redis for this exact reason -- I prerender over 2,000 page templates twice a day, and store them in RAM. The app server has to do a little processing before sending the pages to users -- it picks a different template depending on whether the user's logged in or not, and then substitutes the user's info into the template (for logout/profile links). The session info is also stored in redis. This lets me reboot the server and be ready to serve pages again almost as soon as it's back up. With all the data, redis uses about 300-400MB RAM on a 64bit Debian VM.
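The per-request step described above can be sketched roughly like this (the key naming scheme, template markup, and session shape are all hypothetical; a plain dict stands in for redis so the sketch is self-contained, but redis-py's `get` would slot in the same way):

```python
# Prerendered templates stored under keys like "page:<name>:<variant>".
# In production these would live in redis, refreshed twice a day.
store = {
    "page:home:anon": "<nav>log in</nav><p>Welcome!</p>",
    "page:home:auth": "<nav>{username} | log out</nav><p>Welcome!</p>",
}

def render(page, session):
    # Pick the template variant by login state, then substitute the
    # user's info into the logged-in variant (logout/profile links).
    if session is None:
        return store["page:%s:anon" % page]
    template = store["page:%s:auth" % page]
    return template.format(username=session["username"])
```

For example, `render("home", {"username": "pg"})` fills the logged-in template, while `render("home", None)` returns the anonymous one untouched. Since the templates and sessions both live in RAM, a reboot only has to repopulate redis before the app can serve again.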

I use a VPS for my site, and on a VPS, RAM is the only allocated resource you can depend on always being available. The processor cores might be shared with a busy neighbor, and you can't always count on high disk I/O speeds.





