
Of course caching is enabled. Reads are all coming from cache. Performance problems come on writes, which currently wait for the disk, always, even though I don't care if they do...I'd be happy with it writing to disk five minutes later, but that's not an option I've been able to find.



Take a look at this option in MySQL InnoDB. I've used it in the past to radically improve speed:

"If the value of innodb_flush_log_at_trx_commit is 0, the log buffer is written out to the log file once per second and the flush to disk operation is performed on the log file, but nothing is done at a transaction commit. When the value is 1, the log buffer is written out to the log file at each transaction commit and the flush to disk operation is performed on the log file. When the value is 2, the log buffer is written out to the file at each commit, but the flush to disk operation is not performed on it. However, the flushing on the log file takes place once per second also when the value is 2. Note that the once-per-second flushing is not 100% guaranteed to happen every second, due to process scheduling issues."

From https://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.ht...
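
For example, in my.cnf (a minimal sketch; with a value of 2, an OS crash or power loss can lose roughly the last second of committed transactions, though a crash of mysqld alone loses nothing):

    [mysqld]
    # commits only write the log buffer to the OS; the flush to disk
    # happens about once per second instead of at every commit
    innodb_flush_log_at_trx_commit = 2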


Why don't you try SSD storage? The database is so small it shouldn't be expensive.


We are moving to SSD. It just takes a while. This is the last of our servers to not have SSD. But, nonetheless, it seems obvious that for many use cases, one should be able to say, "I don't care if it's on disk immediately. Just be fast."


You can sort of do that with Postgres.

The "nice" option is to tune "fsync", "synchronous_commit" and "wal_sync_method" in postgresql.conf

If your overall write load is low enough that you will catch up over a reasonable amount of time, the really dirty method is to set up replication (internally on a single server, if you like) and put the master's data directory on a RAM disk. If your server crashes, just copy the slave's data directory, remove the recovery.conf file, and you'll lose only whatever hadn't been replicated yet.
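
Recovery after a crash would look roughly like this (a sketch only; the paths are assumptions about your layout):

    # stop whatever is still running, then promote the on-disk slave copy
    pg_ctl -D /data/pg_slave stop
    cp -a /data/pg_slave /data/pg_restored
    rm /data/pg_restored/recovery.conf
    pg_ctl -D /data/pg_restored start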

But in terms of time investment in solving this, it's likely going to be cheaper to just stick an SSD in there.



