A well-designed, fast app server can serve about 1,000 requests per second per processor core, and you might have 50 processor cores in a 2U rack, for roughly 50,000 requests per second. For database access, modern NVMe disks can push 2 million IOPS, plenty of headroom to serve those 50,000 accesses.
50,000 requests per second is good enough for a million concurrent users, maybe 10-50 million users per day.
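The arithmetic behind these figures can be sketched as follows. The think time and per-user daily request counts are my own assumptions chosen to make the stated numbers line up, not figures from the comment:

```python
# Back-of-envelope check of the capacity figures above.
# Assumptions: 1,000 req/s per core, 50 cores, 2M IOPS from NVMe,
# one request every ~20 s per active user, ~100-400 requests/user/day.

cores = 50
rps_per_core = 1_000
total_rps = cores * rps_per_core             # 50,000 req/s for the rack

nvme_iops = 2_000_000
io_budget = nvme_iops // total_rps           # 40 I/O ops available per request

think_time_s = 20                            # seconds between requests per active user
concurrent_users = total_rps * think_time_s  # 1,000,000 concurrent users

requests_per_day = total_rps * 86_400        # 4.32 billion requests/day
daily_users_low = requests_per_day // 400    # ~10.8M users at 400 req/user/day
daily_users_high = requests_per_day // 100   # ~43.2M users at 100 req/user/day

print(total_rps, io_budget, concurrent_users, daily_users_low, daily_users_high)
```

Under those assumptions the single rack lands right in the "10-50 million users per day" range.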
If you have 50 million users per day, then you're already among the largest websites in the world. Do you really need this sort of architecture for your startup system?
If anything, you'd probably need a more distributed system that reduces network latencies around the world, instead of a single scale-out system.
I'm seeing about twice that on highly dynamic PHP pages with ~10 reads/writes from/to MariaDB (running on the same machine).