
What kind of numbers are they talking about for it to be "large-scale"?

One well-designed, fast app server can serve 1,000 requests per second per processor core, and you might have 50 processor cores in a 2U rack server, for 50,000 requests per second. For database access, you now have fast NVMe disks that can push 2 million IOPS to serve those 50,000 accesses.
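
Spelling that out (a rough sketch in Python; the one-disk-access-per-request figure is an assumption, not something measured):

    # Back-of-the-envelope capacity of one 2U box, using the numbers above.
    rps_per_core = 1_000                       # a well-designed app server, per core
    cores = 50                                 # cores in a single 2U server
    server_rps = rps_per_core * cores          # 50,000 requests/sec

    nvme_iops = 2_000_000                      # what a fast NVMe setup can push
    io_per_request = 1                         # assumption: ~one disk access per request
    iops_needed = server_rps * io_per_request  # 50,000 IOPS

    print(server_rps, iops_needed, nvme_iops // iops_needed)  # 50000 50000 40 (40x IOPS headroom)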

50,000 requests per second is good enough for a million concurrent users, maybe 10-50 million users per day.
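
The users-per-day range falls out of assumed per-user request rates (the think time and requests-per-user figures below are illustrative guesses, not measurements):

    # How 50,000 req/s maps to users, under assumed per-user rates.
    server_rps = 50_000

    think_time_s = 20                             # assume an active user sends a request every ~20 s
    concurrent_users = server_rps * think_time_s  # 1,000,000 concurrent users

    requests_per_day = server_rps * 86_400        # ~4.3 billion requests/day
    for reqs_per_user in (100, 400):              # assumed requests per user per day
        print(requests_per_day // reqs_per_user)  # ~43 million and ~11 million users/day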

If you have 50 million users per day, then you're already among the largest websites in the world. Do you really need this sort of architecture for your startup system?

If anything, you'd probably need a geographically distributed system that reduces network latency for users around the world, rather than a single scale-out system in one location.




And in case anyone thinks otherwise: 1K rps/core isn't necessarily hard to achieve.

I'm seeing about twice that on highly dynamic PHP pages with ~10 reads/writes to MariaDB (running on the same machine).
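
For scale, the implied database load (illustrative arithmetic, treating that ~2K figure as per core):

    # DB throughput implied by the benchmark described above.
    page_rps = 2_000                   # ~2x the 1K rps/core figure
    db_ops_per_page = 10               # reads/writes against MariaDB per page
    print(page_rps * db_ops_per_page)  # 20,000 DB operations/sec

That's a small fraction of the NVMe IOPS budget mentioned upthread, which is why a single machine can keep up.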


Why not have a scale-up system?


Because it costs money and slows development and ops down. Is there a good reason for getting it when you are not one of the ~200 companies in the world with enough scale to use it?



