I like to think that I know enough Clojure to decipher that the database and metadata are locked during compaction. Is this true? See http://github.com/mmcgrana/fleetdb/blob/master/src/clj/fleet...
I wrote something called LogStore based on the general notion of log-structured data (then learned about the work of Ousterhout et al. in the 90s). For what it's worth, I avoided the need to lock the metadata and the database during compaction, allowing the log to grow during compaction.
Instead of working through the locked metadata (offset per entry) and rewriting the entries to a new file, the compactor works from the end of the log file toward the start. Each record descriptor is actually written after the record data, which lets the compactor easily skip over older versions of records it has already processed during the compaction iteration.
Once the compactor reaches the start of the file, it checks to see if the file grew (more appends, new data) since the beginning of the compaction, and starts over from the new EOF, stopping at the previous EOF. This repeats until the compactor fully catches up.
Then the database is temporarily locked, the new metadata is swapped in, and the existing log file is overwritten with the compacted log file.
When a compaction is requested, the compact query enters the write queue. When the query reaches the head of the queue, an asynchronous compaction is started on a snapshot of the database at that time, to a file in /tmp. Also, a buffer is added to the database metadata in which subsequent queries will be stored. After this essentially instant operation, the database can proceed to process write queries as normal.

While the compaction is ongoing, write queries are appended to the buffer mentioned above. When the compaction thread that was spawned earlier finishes writing its snapshot, it inserts into the write queue a request to finalize compaction. When this request reaches the head of the queue, it appends all the buffered queries to the compacted database file. Finally, it moves the compacted file from /tmp to the regular database path.

So writes are blocked once for an instant and once for however long it takes to write those buffered queries, which shouldn't be long either. Note that reads are never blocked by compaction; indeed they are never blocked in general.
Why not just open a new file at compaction-start instead of using an in-memory buffer? When compaction ends, append the contents of the newly opened file to the compacted file, then swap in the compacted file as the current log file.
I suppose deciding on whether to buffer in memory or on disk would depend on several factors:
1) how much compaction is required and thus how long compaction might take to complete
2) historical write-rate average
3) buffer size threshold
4) compaction time threshold
By thresholding I mean: start buffering in memory and then switch to a file on disk if compaction starts taking "too long" to complete or the buffer in memory becomes "too large".
edit: very nice project, btw. I always fancied writing a db in lisp.
I don't think that 1) will be much of a problem: the number of writes the db can process in the time it takes to compact even a large db should be small relative to how quickly they can be appended to the compacted file afterwards.
2) is more problematic; I'll need to add a timeout-like guard to prevent a runaway write buffer.
Thanks for your work, and welcome to the in-memory database developers crew ;)