
> You generally want to keep your CPU fully utilized

Only if your load is very predictable. If there is a chance of a spike, you often want enough headroom to handle it. Even if you have some kind of automated scaling, that can take time, and you probably want a buffer until your new capacity is available.

I think many here are misunderstanding what was likely meant: PostgreSQL was not able to use all the available CPU in this situation, in that it was oscillating between 10% and 70% CPU use. That 40% average CPU use isn't an asset on a dedicated database server: it just means that the other 60% of available cycles are a perishable resource that is immediately spoiling.

In that sense, you want your database to be able to use all the resources available: all the IOPS, all the CPU cycles, etc.

And, of course, what really matters is the amount of work you get done: this thing does more work, partly by using more CPU cycles, and partly by doing more work per CPU cycle.
