> What bothered me about the original implementation was the lookup table. Even though I knew they’d be cached, I still thought the memory accesses might have a detrimental effect on performance.
The encoding lookup table is an array of four chars. I'd be surprised if accessing such an array has a detrimental effect on any program.
I also wonder why the graphs say "Encode / Decode", as if you are combining the performance of the encoding function and the decoding function. Have you considered measuring them separately?
It would also help reproducibility if you include your compiler, versions, and flags. You mentioned that you've turned off all optimizations, but I wonder why.
"O0" would certainly produces a lot of jumps for the switch. But O2 is certainly going to eliminate those jumps. In fact, with O2, gcc11 seems to produce identical code for switch and lookup table.
I'd expect that to be faster just from having the lookup table in the code itself, and not having to use a different cache line from the data segment (as well as avoiding pointer indirection).
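For reference, the two variants being compared presumably look something like this (a sketch, not the post's actual source; the table contents are inferred from the polynomial discussed below):

    /* Sketch of the two variants under discussion; the table contents
       (tab, LF, CR, space) are inferred from the polynomial below. */
    static const char TBL[4] = { '\t', '\n', '\r', ' ' };

    char encode_tbl(unsigned char dibit) { return TBL[dibit & 3]; }

    char encode_switch(unsigned char dibit) {
        switch (dibit & 3) {
            case 0: return '\t';
            case 1: return '\n';
            case 2: return '\r';
            default: return ' ';
        }
    }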
Interesting approach with the polynomials! I wonder whether the performance of that one can be improved – while the coefficients aren’t integer, they become integer when multiplied by 3. So perhaps evaluating an integer-coefficient polynomial (using Horner’s scheme) and dividing by 3 at the end (which the compiler might optimize to a multiplication) might provide a speed boost:
    unsigned char poly_encode_2(const unsigned char x) {
        /* Evaluates to 9, 10, 13, 32 ('\t', '\n', '\r', ' ') for x = 0..3. */
        return (27 + x * (14 + x * (-18 + x * 7))) / 3;
    }
When I first saw this, I thought it ought to be possible to express the encoding/decoding in simple computations, but I only just got a chance to try things out[1].
There are many things I'd fiddle with in the original source. I noticed that since I first saw this post, someone even contributed an AVX2 implementation[2]. I stayed strictly in standard C land:
This function converts a given character to the corresponding half-nibble (or dibit; I don't really know what to call a pair of bits):
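One possible branch-free version (a sketch with a hypothetical name, assuming the character-to-dibit mapping implied by the polynomial above: '\t' → 0, '\n' → 1, '\r' → 2, ' ' → 3):

    unsigned char ws_to_dibit(const unsigned char c) {
        /* Maps '\t' (9) -> 0, '\n' (10) -> 1, '\r' (13) -> 2, ' ' (32) -> 3
           without branches; other inputs are not handled. */
        return ((c >> 1) - (c >> 5)) & 3;
    }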
I haven't actually had a chance to investigate any performance gains, but this answers the question of whether we can replace the lookup tables with simple functions with no branching.
Horner's method serializes the entire computation, so it is penalized on modern hardware. When accuracy is less critical, it is common to split the computation into a few chunks, where each chunk uses Horner's method but the chunks can be evaluated in parallel.
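For the cubic above, that split (often called Estrin's scheme) could look something like this sketch:

    unsigned char poly_encode_estrin(const unsigned char x) {
        /* (27 + 14x) and (-18 + 7x) have no data dependency, so an
           out-of-order core can evaluate them in parallel. */
        const int x2 = x * x;
        return ((27 + 14 * x) + x2 * (-18 + 7 * x)) / 3;
    }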
This brings back fond memories of the time when I did video processing code. So much more fun doing this kind of work on self contained code versus server side stuff dealing with dozens of misbehaving external APIs.
In reality, if you have many bytes in the buffer and call these functions in a loop, the compiler should inline them and hoist all these magic vectors out of the loop. That way there won't be any memory loads, except to fetch the source data.
While I applaud the effort to document the optimization process, these approaches are fundamentally slow. The easiest portable optimization would be to convert 8 full bits at a time:
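Something like this sketch (the names follow the description below; it assumes the low dibit maps to the first output character):

    #include <stdint.h>
    #include <string.h>

    /* Sketch: each possible input byte expands to four whitespace
       characters, precomputed as one 32-bit word. The table is built
       at runtime (on demand), so the native byte order is baked in. */
    static uint32_t encode_lookup_tbl2[256];

    static void init_encode_lookup_tbl2(void) {
        static const char WS[4] = { '\t', '\n', '\r', ' ' };
        for (int b = 0; b < 256; b++) {
            char c[4];
            for (int i = 0; i < 4; i++)
                c[i] = WS[(b >> (2 * i)) & 3];
            memcpy(&encode_lookup_tbl2[b], c, 4);
        }
    }

    static void encode(const unsigned char *in, size_t n, uint32_t *ws_out) {
        for (size_t i = 0; i < n; i++)
            ws_out[i] = encode_lookup_tbl2[in[i]];
    }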
Note that the resulting encode_lookup_tbl2 depends on the endianness, but as long as we build the table on demand it remains portable. Also, `ws_out` might have to be aligned for portability (unaligned memory access is slow at best and traps at worst), but that is easy to guarantee when these functions are only used internally. In my measurement this alone made it ~3.5x faster.
The next obvious approach is to use SIMD. Pretty much every modern CPU gives you a 16-byte shuffle, which can be used to convert 4 bits at a time. (EDIT: This turned out to be not quite correct, see below.) I tried to write a quick example, but it takes quite a bit of time to write and debug [1] ;-) (I may update this post if I get a working version.)
Lastly, it might be possible to use mmap to eliminate the intermediate buffer entirely. I think the actual performance gain with and without mmap can vary a lot and you'd need to profile it, but it's worth considering.
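A sketch of that idea (POSIX, error handling abbreviated; the function name is made up, and it again assumes the four-whitespace-character mapping with the low dibit first):

    #include <stddef.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Sketch: encode straight into an mmap'd output file, skipping the
       intermediate buffer. */
    int encode_to_file(const unsigned char *in, size_t n, const char *path) {
        size_t out_len = n * 4;           /* each byte -> 4 whitespace chars */
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return -1;
        if (ftruncate(fd, (off_t)out_len) != 0) { close(fd); return -1; }
        char *out = mmap(NULL, out_len, PROT_WRITE, MAP_SHARED, fd, 0);
        if (out == MAP_FAILED) { close(fd); return -1; }
        for (size_t i = 0; i < n; i++)    /* or any of the faster encoders */
            for (int j = 0; j < 4; j++)
                out[4 * i + j] = "\t\n\r "[(in[i] >> (2 * j)) & 3];
        munmap(out, out_len);
        close(fd);
        return 0;
    }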
[1] I realized that there is no bytewise shift operator in x86 SIMD (_mm_srli_epi8). Oops!
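(The usual workaround is to shift 16-bit lanes and mask off the bits that cross byte boundaries; a sketch for a shift by 2, using SSE2:)

    /* Emulates a per-byte right shift by 2: shift 16-bit lanes, then
       clear the bits that leaked across byte boundaries. */
    static inline __m128i srli_epi8_2(__m128i v) {
        return _mm_and_si128(_mm_srli_epi16(v, 2), _mm_set1_epi8(0x3F));
    }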
---
Added later: So the following is my crude AVX2 implementation. It is 20% faster than the bytewise version and 10% faster than the original SSSE3 version. It is not that satisfactory though, because it has to do a lot of unpacking to get the bytes into the correct positions (I'm not a great SIMD coder after all).
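A rough sketch of one way such a kernel can look (this is an illustration, not the commenter's original code; it converts 8 input bytes to 32 output bytes per iteration, assumes n is divisible by 8, and assumes the low-dibit-first order used above):

    #include <immintrin.h>
    #include <stddef.h>

    static void encode_avx2(const unsigned char *in, size_t n, char *ws_out) {
        const __m256i three = _mm256_set1_epi32(3);
        /* pshufb indices are 0..3, so only the first 4 entries of each
           16-byte lane matter; the pattern is just repeated. */
        const __m256i lut = _mm256_setr_epi8(
            '\t','\n','\r',' ', '\t','\n','\r',' ',
            '\t','\n','\r',' ', '\t','\n','\r',' ',
            '\t','\n','\r',' ', '\t','\n','\r',' ',
            '\t','\n','\r',' ', '\t','\n','\r',' ');
        for (size_t i = 0; i < n; i += 8) {
            /* Zero-extend 8 bytes so each input byte owns a 32-bit lane. */
            __m256i v = _mm256_cvtepu8_epi32(
                _mm_loadl_epi64((const __m128i *)(in + i)));
            /* Scatter the four dibits of each byte into the four byte
               positions of its lane. */
            __m256i d0 = _mm256_and_si256(v, three);
            __m256i d1 = _mm256_slli_epi32(
                _mm256_and_si256(_mm256_srli_epi32(v, 2), three), 8);
            __m256i d2 = _mm256_slli_epi32(
                _mm256_and_si256(_mm256_srli_epi32(v, 4), three), 16);
            __m256i d3 = _mm256_slli_epi32(
                _mm256_and_si256(_mm256_srli_epi32(v, 6), three), 24);
            __m256i idx = _mm256_or_si256(_mm256_or_si256(d0, d1),
                                          _mm256_or_si256(d2, d3));
            /* 16-byte shuffle as the dibit -> character lookup. */
            _mm256_storeu_si256((__m256i *)(ws_out + 4 * i),
                                _mm256_shuffle_epi8(lut, idx));
        }
    }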
Ugh, you were correct. I did copy and paste your code into my testing framework and it instantly crashed at the time, but it seems that I had put a wrong offset into the output. The resulting code was slightly faster (by 2–4%) than my AVX2 code.
Cool post! Thanks for submitting. I would never in a million years have thought to try out a polynomial fit for something like this. Unsurprisingly, it was slower, but still cool to even see you thinking that way at all.