
Given enough RAM on a Linux machine, one may use tmpfs, which provides a RAM-backed file system that at any moment uses only as much RAM as its contents require, up to a pre-defined limit.

For PostgreSQL: create an adequately capped tmpfs, create a TABLESPACE on it, then store temporary tables in this TABLESPACE. No SSD (that I have access to) beats this. Hint: before shutting PG down you may DROP this TABLESPACE.
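A minimal sketch of that setup (the mount point, size, and the tablespace name ram_ts are all illustrative; adjust to your RAM budget):

    # as root: create and mount a capped tmpfs for PostgreSQL to use
    mkdir -p /mnt/pg_ram
    mount -t tmpfs -o size=8G,mode=0700 tmpfs /mnt/pg_ram
    chown postgres:postgres /mnt/pg_ram

    # as a PostgreSQL superuser: put a tablespace on it and point temp tables there
    psql -c "CREATE TABLESPACE ram_ts LOCATION '/mnt/pg_ram'"
    psql -c "ALTER SYSTEM SET temp_tablespaces = 'ram_ts'"   # or SET per session
    psql -c "SELECT pg_reload_conf()"

    # before shutting PostgreSQL down
    psql -c "DROP TABLESPACE ram_ts"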

tmpfs is also useful for a blockchain: it is amazingly fast (and a relief for HDDs), in most cases removing the need for an SSD. Place the blockchain file(s) on the tmpfs mount. Before machine shutdown, stop any blockchain-using software and store a compressed copy of the blockchain file(s) on permanent storage (I use "zstd -T0 --fast"...); upon reboot, restore it to the tmpfs mount. If anything fails, the blockchain software will re-download any missing blocks.
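Roughly, the shutdown/boot dance looks like this (the paths, size, and service name are made up for illustration):

    # before shutdown: stop the node, snapshot the tmpfs contents
    systemctl stop mynode                     # hypothetical service name
    tar -C /mnt/ram/chain -cf - . | zstd -T0 --fast -f -o /var/backups/chain.tar.zst

    # after reboot: remount, restore, restart
    mkdir -p /mnt/ram && mount -t tmpfs -o size=32G tmpfs /mnt/ram
    mkdir -p /mnt/ram/chain
    zstd -dc /var/backups/chain.tar.zst | tar -C /mnt/ram/chain -xf -
    systemctl start mynode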




While tmpfs can be very useful as it is, users must beware that copying a file from another Linux file system to tmpfs can lose part of the file's metadata, without any warning or error.

The main problem is that copying a file to tmpfs will drop extended attributes. Old versions of tmpfs dropped all extended attributes; modern versions keep some security-related extended attributes, but they still drop any user-defined extended attributes.
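It is easy to check on your own kernel (the attribute name user.note and the paths are just examples; setfattr/getfattr come from the attr package):

    touch ~/demo
    setfattr -n user.note -v "hello" ~/demo
    cp --preserve=all ~/demo /tmp/demo    # /tmp mounted on tmpfs here
    getfattr -d /tmp/demo                 # on affected kernels, user.note is gone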

Old versions of tmpfs truncated some high-resolution timestamps, e.g. those coming from XFS, but I do not know whether this still happens on modern versions of tmpfs.
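If someone wants to check, comparing the full-resolution timestamps of the source and the tmpfs copy (reusing the files from the example above) shows whether a given kernel is affected:

    stat -c '%n  %y' ~/demo /tmp/demo     # both should show nanosecond mtimes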

Before learning these facts, I could not understand why some file copies lost parts of their metadata after being copied via /tmp between 2 different users, on a multi-user computer where /tmp was mounted on tmpfs.

Now that I know, when I have to copy a file via tmpfs I make a pax archive, which preserves the file metadata. Older tar archive formats may have the same problems as tmpfs.
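With GNU tar (built with xattr support) the pax archive format plus the right flags carries the extended attributes and ACLs across the tmpfs hop; a sketch, with illustrative paths:

    tar --format=pax --xattrs --acls -cf /tmp/bundle.tar -C ~ demo
    # the receiving user extracts with matching flags:
    tar --xattrs --acls -xf /tmp/bundle.tar -C /home/otheruser/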


Isn't this extremely dangerous? Disk write caches aren't used most of the time, except behind battery-backed HBAs, and databases are typically configured to use O_DIRECT for a reason: COMMITs are supposed to be durable. We had this fight at a previous company, when an engineer based a database-server hardware recommendation on a dangerously misconfigured server and did not consider the effect of caches. As soon as a safe configuration was used in production, performance dropped off a cliff, particularly on random IO. So the question we had to ask was: do you want to trade durability for performance? Or do you now have to carve your databases into shards that fit the IO characteristics of the badly chosen servers you purchased, wasting rack space and CPU power?


Parent is talking about temporary tables. Those normally live only for the duration of a transaction (well, a session, but in practice if you're using temporary tables across multiple transactions you have a logical application-level transaction which needs to be able to handle failure part-way through). After your transaction commits, the writes to non-temporary tables should be persistent.
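For example (ON COMMIT DROP is the strictest form; the default, ON COMMIT PRESERVE ROWS, keeps the table for the whole session):

    psql <<'SQL'
    BEGIN;
    CREATE TEMPORARY TABLE scratch (n int) ON COMMIT DROP;
    INSERT INTO scratch SELECT generate_series(1, 1000);
    -- use scratch here ...
    COMMIT;   -- scratch vanishes; only writes to regular tables persist
    SQL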

Postgres temp tables on a ramdisk are a problem for a different reason: the WAL, as pointed out by a sibling comment.


> Postgres temp tables on a ramdisk are a problem for a different reason: the WAL, as pointed out by a sibling comment

TEMPORARY tables are UNLOGGED, and therefore they aren't WALed


Gotcha, somehow missed that. Yeah, temp tables on disk are painful, and I've made the same optimization on MySQL whenever it wasn't possible to eliminate the need for temp tables by refactoring the SQL.


Could you relate your day-to-day experience to 2ndQuadrant's (contradictory?) advice?

https://www.2ndquadrant.com/en/blog/postgresql-no-tablespace...


TEMPORARY tables are UNLOGGED, and therefore they aren't WALed

See https://www.postgresql.org/message-id/CAB7nPqTkZvESuZ3qcN_Tj...



