While I applaud the effort to document the optimization process, the documented approaches are still fundamentally slow. The easiest and most portable optimization would be converting 8 full bits at a time.
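Roughly like this, assuming the task is expanding each input byte into eight ASCII '0'/'1' characters (names other than encode_lookup_tbl2 and ws_out are mine):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static uint64_t encode_lookup_tbl2[256];

    /* Built at run time: entry i holds the eight '0'/'1' characters of the
     * byte i, most significant bit first. Because the entries are assembled
     * with memcpy on the running host, the in-memory byte order comes out
     * right on any endianness. */
    static void init_encode_lookup_tbl2(void) {
        for (int i = 0; i < 256; ++i) {
            uint8_t chars[8];
            for (int bit = 0; bit < 8; ++bit)
                chars[bit] = ((i >> (7 - bit)) & 1) ? '1' : '0';
            memcpy(&encode_lookup_tbl2[i], chars, sizeof chars);
        }
    }

    /* One table load and one 64-bit store per input byte; ws_out has to be
     * suitably aligned for uint64_t. */
    static void encode_bytewise(const uint8_t *in, size_t n, uint64_t *ws_out) {
        for (size_t i = 0; i < n; ++i)
            ws_out[i] = encode_lookup_tbl2[in[i]];
    }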
Note that the resulting encode_lookup_tbl2 depends on the endianness, but as long as we build the table on demand it remains portable. Also, `ws_out` might have to be aligned for portability (unaligned memory access can be slow at best and can trap at worst), but that is easy to guarantee when these functions are only used internally. In my measurements this alone made it ~3.5x faster.
The next obvious approach is to use SIMD. Pretty much every modern CPU gives you a 16-byte shuffle, which can be used to convert 4 bits at a time. (EDIT: This turned out not to be quite correct; see below.) I tried to write a quick example, but it takes quite a bit of time to write and debug [1] ;-) (I may update this post if I get a working version.)
Lastly, it might be possible to use mmap to eliminate the intermediate buffer altogether. I think the actual performance gain with and without mmap can vary a lot and you need to profile it, but it's worth considering.
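Something along these lines (POSIX, error handling kept minimal; the bytewise kernel above would drop into the inner loop):

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Map the output file and encode straight into it; the names and the
     * eight-characters-per-byte assumption are mine. */
    static int encode_to_file(const char *path, const uint8_t *in, size_t len) {
        size_t out_len = len * 8;
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return -1;
        if (ftruncate(fd, (off_t)out_len) < 0) { close(fd); return -1; }
        uint8_t *out = mmap(NULL, out_len, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (out == MAP_FAILED) { close(fd); return -1; }
        /* The bytewise kernel from above would go here instead. */
        for (size_t i = 0; i < len; ++i)
            for (int b = 0; b < 8; ++b)
                out[8 * i + b] = ((in[i] >> (7 - b)) & 1) ? '1' : '0';
        munmap(out, out_len);
        close(fd);
        return 0;
    }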
[1] I realized that there is no bytewise shift operator in x86 SIMD (_mm_srli_epi8). Oops!
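For what it's worth, the usual workaround is to shift 16-bit lanes and then mask off the bits that crossed byte boundaries, e.g. to get the high nibble of every byte in `v`:

    __m128i hi = _mm_and_si128(_mm_srli_epi16(v, 4), _mm_set1_epi8(0x0f));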
---
Added later: So the following is my crude AVX2 implementation. It is 20% faster than the bytewise version and 10% faster than the original SSSE3 version. It is not that satisfying, though, because it has to do a lot of unpacking to get the bytes into the correct positions (I'm not a great SIMD coder after all).
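In rough outline (the timings above are for the real thing, not this simplified sketch; it assumes the same '0'/'1' expansion as above and broadcasts four input bytes into both 128-bit lanes, because vpshufb cannot move bytes across them):

    #include <immintrin.h>
    #include <stdint.h>
    #include <string.h>

    /* Expand 4 input bytes into 32 ASCII '0'/'1' characters per iteration. */
    static void encode_avx2(const uint8_t *in, size_t n, uint8_t *out) {
        /* The selector spreads bytes 0-1 over the low lane and bytes 2-3
         * over the high lane; both lanes hold the same 32-bit chunk. */
        const __m256i sel = _mm256_setr_epi8(
            0,0,0,0,0,0,0,0, 1,1,1,1,1,1,1,1,
            2,2,2,2,2,2,2,2, 3,3,3,3,3,3,3,3);
        const __m256i mask = _mm256_setr_epi8(
            -128,64,32,16,8,4,2,1, -128,64,32,16,8,4,2,1,
            -128,64,32,16,8,4,2,1, -128,64,32,16,8,4,2,1);
        const __m256i zero = _mm256_set1_epi8('0');
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            uint32_t w;
            memcpy(&w, in + i, sizeof w);
            __m256i v    = _mm256_shuffle_epi8(_mm256_set1_epi32((int32_t)w), sel);
            /* One mask bit per output position; set bits become '1', clear '0'. */
            __m256i bits = _mm256_cmpeq_epi8(_mm256_and_si256(v, mask), mask);
            _mm256_storeu_si256((__m256i *)(out + 8 * i), _mm256_sub_epi8(zero, bits));
        }
        for (; i < n; ++i)      /* scalar tail for the last 0-3 bytes */
            for (int b = 0; b < 8; ++b)
                out[8 * i + b] = ((in[i] >> (7 - b)) & 1) ? '1' : '0';
    }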
---

Ugh, you were correct. I did copy and paste your code into my testing framework and it instantly crashed at the time, but it turns out I had put a wrong offset into the output. The resulting code was slightly faster (by 2-4%) than my AVX2 code.