Hacker News

That's simplifying things a bit.

Postgres has a shared memory cache, which can be sized so your working set stays in RAM, much like Redis. Your operations will all happen in memory, with background processes flushing to disk for you in case your machine shuts off. Storage won't be on the hot path.
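The relevant knobs live in postgresql.conf; a sketch with illustrative values (size these to your actual working set and machine):

```
# postgresql.conf -- values are illustrative, not recommendations
shared_buffers = 4GB          # Postgres's own shared memory cache
effective_cache_size = 12GB   # planner hint: shared_buffers + OS page cache
```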

BUT, postgres still has ~6x the latency [1], even when run from memory.

[1] https://medium.com/redis-with-raphael-de-lio/can-postgres-re....




Pretty pointless benchmark, since it does not use prepared statements (that much is obvious from the low TPS, and I confirmed it by looking at how he ran the benchmark). You need to pass "-M prepared" to pgbench. And it is very possible that the Redis benchmark is equally flawed.

If you have to parse and plan queries every time, PostgreSQL is obviously much slower than Redis. It would be much more interesting to see what happens when prepared statements are used.
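For reference, the invocation in question looks something like this (database name, scale, and client counts are made up for illustration):

```shell
# initialize a benchmark database, then run with prepared statements,
# so each query is parsed and planned once per client instead of per execution
pgbench -i -s 50 bench_db
pgbench -M prepared -c 16 -j 4 -T 60 bench_db
```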


The article is flawed; here's an issue from someone fixing several of its problems:

https://github.com/raphaeldelio/redis-postgres-cache-benchma...


Thanks, I spotted the lack of prepared statements and stopped looking after that but you spotted the rest of the issues too. And, agreed, it is likely that the Redis benchmark is flawed too but I do not know Redis well enough.


But the whole point here is that PostgreSQL will be used for other tasks, e.g. storing all of your business data. So it will be fighting for the shared cache as well as the disk. And of course storage will still be involved since, again, you can't have in-memory-only tables.

And having a locking system fluctuate in latency between milliseconds and seconds would cause all sorts of headaches.


If you're small enough that you're considering cohosting the app and DB, the odds are good that your working set is small enough to comfortably fit into RAM on any decently-sized instance.

> And having a locking system fluctuate in latency between milliseconds and seconds would cause all sorts of headaches.

With the frequency that a locking system is likely to be used, it’s highly unlikely that those pages would ever get purged from the buffer pool.


Once again: PostgreSQL does not have in-memory tables. So:

a) Writes will be happening on a slow, highly contentious disk.

b) You have no guarantees that the data will be in-memory.

Both of which make it a poor solution for use cases such as locking.


As pointed out by blackenedgem above: PostgreSQL has tablespaces, and one may simply declare tables which should stay in RAM in a tablespace built on a tmpfs with enough reserved RAM to store them all. There is only a small associated burden (restoring the tablespace when starting PG).
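A minimal sketch of that setup (mount point, tablespace, and table names are all hypothetical; an UNLOGGED table additionally skips WAL writes for the data itself):

```sql
-- assumes a tmpfs has been mounted first, e.g.:
--   mount -t tmpfs -o size=2g tmpfs /mnt/pg_ram
CREATE TABLESPACE ram_space LOCATION '/mnt/pg_ram';

-- UNLOGGED: no WAL for this table's data, at the cost of losing it on crash
CREATE UNLOGGED TABLE app_locks (
    lock_key  text PRIMARY KEY,
    holder    text,
    acquired  timestamptz DEFAULT now()
) TABLESPACE ram_space;
```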


> And of course storage will still be involved as again you can't have in-memory only tables.

That's fine. If what you want is a single-node deployment combined with in-memory purism, then SQLite has your back.
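For what it's worth, SQLite's in-memory mode is a one-liner; a sketch using Python's stdlib `sqlite3` (table and key names are made up):

```python
import sqlite3

# ":memory:" keeps the entire database in RAM; the disk is never touched
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE locks (lock_key TEXT PRIMARY KEY, holder TEXT)")
conn.execute("INSERT INTO locks VALUES (?, ?)", ("job:42", "worker-1"))

holder = conn.execute(
    "SELECT holder FROM locks WHERE lock_key = ?", ("job:42",)
).fetchone()[0]
print(holder)  # worker-1
```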



