Thought I would share this with the HN crowd. We have a PostgreSQL 11 server running in Azure that was getting low on disk space. Instead of adding more space (and more $$$), we decided to compare the performance of a compressed ZFS volume against a standard ext4 volume. Initial testing showed a 5x compression ratio for our data(!); however, query performance was much worse than on ext4.
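For anyone wanting to reproduce the comparison: compression is set per dataset, and ZFS reports the achieved ratio directly. A minimal sketch (the pool/dataset name is just a placeholder, and we tuned the actual settings for our workload):

    # create a compressed dataset for the PG data directory
    zfs create -o compression=lz4 tank/pgdata

    # check the achieved compression ratio
    zfs get compression,compressratio tank/pgdata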
After many days of testing and tuning, I discovered that enabling ZFS volume compression also enables compression of the ZFS ARC (the in-RAM cache), which in turn can impose up to a 40% performance hit when reading data from the ARC. It seems this ARC compression issue[1] has been seen before, but it did not turn up in my initial Google searching. I started a thread[2] over on the ZFS on Linux discussion group and was given a couple of suggestions. With additional tuning and testing, we finally got ZFS to perform as well as (if not better than) ext4 for our use case.
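For reference, the compressed-ARC tunable can be flipped at runtime through the module's parameters, which makes A/B testing easy. A sketch, assuming a ZFS on Linux build that exposes this parameter (run as root):

    # check the current setting (1 = compressed ARC enabled, the default)
    cat /sys/module/zfs/parameters/zfs_compressed_arc_enabled

    # disable compressed ARC on the running system
    echo 0 > /sys/module/zfs/parameters/zfs_compressed_arc_enabled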
Additionally, per this thread[3] on GitHub: "The [linux] 5.3 kernel adds a new feature which allows pages to be zeroed when allocating or freeing them: init_on_alloc and init_on_free. init_on_alloc is enabled by default on Ubuntu 18.04 HWE kernel. ZFS allocates and frees pages frequently (via the ABD structure), e.g. for every disk access. The additional overhead of zeroing these pages is significant". Setting both options to 0 on the kernel command line gained another 8-10% in performance.
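On Ubuntu the usual way to add those flags is via GRUB; a sketch (adjust for your distro/boot loader):

    # /etc/default/grub -- append the flags to the existing line:
    GRUB_CMDLINE_LINUX_DEFAULT="... init_on_alloc=0 init_on_free=0"

    sudo update-grub && sudo reboot

    # verify after reboot
    cat /proc/cmdline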
The TLDR (a persistent-config sketch follows the list):
* Disable compressed ARC (options zfs zfs_compressed_arc_enabled=0)
* Disable ABD scatter (options zfs zfs_abd_scatter_enabled=0)
* Disable zeroing of memory pages when allocating or freeing them (add "init_on_alloc=0 init_on_free=0" to the kernel boot parameters)
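To make the two module options persist across reboots, they go in a modprobe config file. A sketch, assuming Ubuntu (the file name is just convention):

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_compressed_arc_enabled=0
    options zfs zfs_abd_scatter_enabled=0

    # on Ubuntu the zfs module is often loaded from the initramfs,
    # so regenerate it for the options to apply at boot
    sudo update-initramfs -u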
Note: These results were specific to our workload and may not apply to all use cases...
[1] https://github.com/openzfs/zfs/issues/12813
[2] https://zfsonlinux.topicbox.com/groups/zfs-discuss/T5122ffd3...
[3] https://github.com/openzfs/zfs/issues/9910 (see comments from ahrens)