Hacker News

If you are using PostgreSQL and your data set is small enough to fit, it is effectively operating in RAM. The write-ahead log receives a sequential copy of every write and is flushed to disk immediately, but the rest (the actual data pages) is lazily written back in the background.
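One way to check whether your data really is being served from RAM is the buffer cache hit ratio, computed from PostgreSQL's cumulative `pg_stat_database` counters (`blks_hit` and `blks_read` are real columns of that view). A minimal sketch of the arithmetic, with hypothetical counter values:

```python
# Sketch: estimate PostgreSQL's buffer cache hit ratio from the
# cumulative counters in pg_stat_database. A ratio near 1.0 means
# reads are being served from shared_buffers (RAM), not disk.
#
# The counters come from a query like:
#   SELECT blks_hit, blks_read FROM pg_stat_database
#   WHERE datname = current_database();

def cache_hit_ratio(blks_hit: int, blks_read: int) -> float:
    """Fraction of block reads satisfied from RAM rather than disk."""
    total = blks_hit + blks_read
    if total == 0:
        return 1.0  # no reads recorded yet; nothing has touched disk
    return blks_hit / total

# Hypothetical counter values for a data set that fits in RAM:
print(cache_hit_ratio(blks_hit=995_000, blks_read=5_000))  # → 0.995
```

If this ratio sits well below ~0.99 for a data set that should fit in memory, `shared_buffers` (and the OS page cache) are worth a look before blaming the disk.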

Given a halfway competent I/O scheduler and some cheap SSDs, you can continuously write new data to disk at network wire speed, even at 10 GbE, while operating on the data in RAM and saturating the outbound network; there is no slowdown at all. Even for databases that do not implement a good I/O scheduler (unfortunately, PostgreSQL among them), this workload is sufficiently trivial that backing it with SSDs should have no performance impact. If you are having performance problems with a 1.4 GB CMS, it is an architecture problem, not a database problem.
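The back-of-envelope arithmetic behind this is easy to reproduce. The throughput figures below are illustrative assumptions (typical sequential write rates, not benchmarks); the 1.4 GB size is the CMS from the comment:

```python
# Back-of-envelope: can commodity SSDs keep up with 10 GbE ingest,
# and how long does rewriting the whole data set take?
# All device throughput figures are rough illustrative assumptions.

GBE_10_BYTES_PER_SEC = 10e9 / 8   # 10 Gb/s line rate -> 1.25e9 bytes/s
SATA_SSD_WRITE = 500e6            # ~500 MB/s sequential, a cheap SATA SSD
NVME_SSD_WRITE = 3e9              # ~3 GB/s sequential, a midrange NVMe SSD

def seconds_to_write(data_bytes: float, rate_bytes_per_sec: float) -> float:
    """Time to sequentially persist data_bytes at the given write rate."""
    return data_bytes / rate_bytes_per_sec

cms_size = 1.4e9  # the 1.4 GB CMS from the comment

print(f"10 GbE ingest rate:          {GBE_10_BYTES_PER_SEC / 1e9:.2f} GB/s")
print(f"full 1.4 GB rewrite on SATA: {seconds_to_write(cms_size, SATA_SSD_WRITE):.1f} s")
print(f"full 1.4 GB rewrite on NVMe: {seconds_to_write(cms_size, NVME_SSD_WRITE):.2f} s")
```

Even the cheap SATA device rewrites the entire 1.4 GB data set in under three seconds, and a single NVMe drive's sequential write rate already exceeds the 1.25 GB/s that a fully saturated 10 GbE link can deliver (matching wire speed with SATA would take striping). At this data size, disk bandwidth is simply not the bottleneck.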

