
Assuming everything is designed to scale linearly to some point well above the current workload, a better way to phrase this is to ask what percentage of your hardware spend goes to compute, network and storage machines.

Note that a single Postgres instance with one core and 1 GB of RAM on one SSD can “scale linearly” by my definition, since you can easily double, triple, etc. all of those specs.

On the other hand, a fully populated data center running scale-out software at 100% power capacity can’t scale linearly, because there’s nothing left to upgrade. For that data center, power (or maybe space) is the bottleneck.

Short answer to the question you asked: SSDs are not going to be the bottleneck for software that’s migrating off disk, and the database probably won’t be either. Hardware trends mean the storage and database just got 10-100x faster, while the business logic maybe doubled in speed.

If the system was well balanced before, that means no one has yet spent the then-unnecessary (but now-necessary) effort optimizing the compute side, so compute becomes the new bottleneck.
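The shift above is easy to see with a back-of-envelope model. This sketch assumes a simple serial request (storage time plus compute time) and illustrative numbers, not measurements from any real system:

```python
# Back-of-envelope model: a request spends time on storage/database work
# and on business-logic (compute) work. All numbers are illustrative.

def request_latency(storage_ms: float, compute_ms: float) -> float:
    """Total serial latency for one request, in milliseconds."""
    return storage_ms + compute_ms

# A "well balanced" system before the hardware shift: 50/50 split.
before = request_latency(storage_ms=10.0, compute_ms=10.0)

# Hardware trend: storage/database ~50x faster (mid-range of 10-100x),
# business logic only ~2x faster.
after = request_latency(storage_ms=10.0 / 50, compute_ms=10.0 / 2)

print(f"before: {before:.1f} ms, after: {after:.1f} ms")
# → before: 20.0 ms, after: 5.2 ms

# Compute went from half the request time to nearly all of it.
compute_share = (10.0 / 2) / after
print(f"compute share after: {compute_share:.0%}")
# → compute share after: 96%
```

Even with a generous 50x storage speedup, overall latency only improves ~4x, and the formerly balanced compute side now dominates.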





