Can you explain how you merge a compressed sorted stream with a sorted chunk to obtain a bigger compressed stream, without extra memory or quadratic complexity?
I can see a solution with memory usage:
sizeof(compressed sorted stream) + sizeof(compressed sorted chunk) + sizeof(compressed merge output)
This would work, but it means you need to fit your compressed sorted stream in less than a single MB instead of 2 MB (and I'm ignoring memory fragmentation problems). A rough sketch of that three-buffer merge is below.
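To make that concrete, here is a minimal sketch of that three-buffer merge. I'm assuming the streams are sorted non-negative integers stored as varint-encoded deltas, which is just a stand-in codec chosen for brevity, not necessarily what the article uses; the point is only that peak memory is input A + input B + output:

```c
/* Sketch only: merge two delta+varint compressed sorted streams (an assumed
 * codec, chosen for brevity) into a third output buffer. Peak memory is
 * sizeof(input A) + sizeof(input B) + sizeof(output), as described above. */
#include <stddef.h>
#include <stdint.h>

/* Append one delta as a LEB128-style varint; returns the new write offset. */
static size_t put_varint(uint8_t *buf, size_t pos, uint32_t v) {
    while (v >= 0x80) { buf[pos++] = (uint8_t)(v | 0x80); v >>= 7; }
    buf[pos++] = (uint8_t)v;
    return pos;
}

/* Decode one varint at *pos, advancing *pos. */
static uint32_t get_varint(const uint8_t *buf, size_t *pos) {
    uint32_t v = 0; int shift = 0; uint8_t b;
    do { b = buf[(*pos)++]; v |= (uint32_t)(b & 0x7F) << shift; shift += 7; } while (b & 0x80);
    return v;
}

/* Cursor over one delta-compressed sorted stream. */
typedef struct { const uint8_t *buf; size_t pos, len; uint32_t last; } cursor_t;

static int      cur_done(const cursor_t *c) { return c->pos >= c->len; }
static uint32_t cur_peek(const cursor_t *c) { size_t p = c->pos; return c->last + get_varint(c->buf, &p); }
static uint32_t cur_next(cursor_t *c)       { c->last += get_varint(c->buf, &c->pos); return c->last; }

/* Merge streams a and b into out (which must be large enough; with this codec
 * the merged deltas only shrink, so a.len + b.len bytes suffice). Returns the
 * number of output bytes written. */
size_t merge_streams(cursor_t a, cursor_t b, uint8_t *out) {
    size_t opos = 0;
    uint32_t last = 0;
    while (!cur_done(&a) || !cur_done(&b)) {
        uint32_t v;
        if (cur_done(&b) || (!cur_done(&a) && cur_peek(&a) <= cur_peek(&b)))
            v = cur_next(&a);
        else
            v = cur_next(&b);
        opos = put_varint(out, opos, v - last);  /* re-encode as a delta */
        last = v;
    }
    return opos;
}
```

(A cursor starts as { buf, 0, len, 0 }, so the first decoded delta is the first value.)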
There are a bunch of ways to do this. One of the more straightforward ones is to use a temporary buffer big enough to hold both the new values and the existing stream, and to start by shifting the existing stream so it is right-aligned (using memmove or the equivalent). Then read bits from the old stream while writing bits into the new stream. Using Elias-Fano coding (which I believe is an instance of Golomb coding), the write position will never overtake the read position, so the merge can be done in place.
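Here is a minimal sketch of that in-place scheme, simplified on my part to plain uint32_t values instead of a bit-packed Elias-Fano stream, and with the new chunk kept in its own array; the index invariant is the same one being relied on (once the old data is right-aligned, the write position can never pass the read position):

```c
/* Sketch: buf has capacity n_old + n_new; its first n_old entries hold the
 * existing sorted values, chunk holds n_new sorted values. After the call,
 * buf holds all n_old + n_new values in sorted order, with no extra buffer. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

void merge_in_place(uint32_t *buf, size_t n_old, const uint32_t *chunk, size_t n_new) {
    /* Step 1: right-align the old values in the buffer (the memmove step). */
    memmove(buf + n_new, buf, n_old * sizeof *buf);

    size_t r = n_new;   /* read index into the shifted old values            */
    size_t c = 0;       /* read index into the new chunk                     */
    size_t w = 0;       /* write index; r - w == n_new - c >= 0 at all times */
    while (c < n_new) {
        if (r < n_old + n_new && buf[r] <= chunk[c])
            buf[w++] = buf[r++];
        else
            buf[w++] = chunk[c++];
    }
    /* Once the chunk is exhausted, w == r, so the remaining old values are
     * already in their final place. */
}
```

With the actual compressed stream the same inequality has to hold at the bit level, which, as I read it, is the Elias-Fano property the comment above relies on.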
In my solution, though, when the extra temporary buffer is very small, in the worst case we need to do sizeof(compressedStream)/sizeof(temporaryBuffer) memmoves, which can potentially make the algorithm quadratic.
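To put rough numbers on that (my own back-of-the-envelope, taking the 2 MB stream mentioned above and assuming a 4 KB temporary buffer): that is about 2 MB / 4 KB ≈ 512 shifts, each potentially touching up to ~2 MB, so on the order of a gigabyte of memmove traffic for a single merge, i.e. the O(S²/B) behaviour you're describing.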
Thanks, I got it. The main trick is the standard "interview" trick of writing from right to left.
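For anyone else reading along, the classic fixed-width version of that right-to-left trick looks roughly like this (my sketch of the usual "merge a sorted array into another that has spare room at the end" exercise):

```c
/* a has capacity n_a + n_b, with its n_a sorted values at the front; b holds
 * n_b sorted values. Merge from the largest values down, writing into the free
 * space at the end of a, so nothing is overwritten before it has been read. */
#include <stddef.h>
#include <stdint.h>

void merge_from_the_right(uint32_t *a, size_t n_a, const uint32_t *b, size_t n_b) {
    size_t i = n_a;        /* one past the last unread value of a        */
    size_t j = n_b;        /* one past the last unread value of b        */
    size_t w = n_a + n_b;  /* one past the next write slot; w - i == j   */
    while (j > 0) {        /* once b is exhausted, a is already in place */
        if (i > 0 && a[i - 1] > b[j - 1])
            a[--w] = a[--i];
        else
            a[--w] = b[--j];
    }
}
```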
I'm not sure your solution works without memory moves, though, because in your second step, if the values of A are larger than the values of C, the growing B will hit the non-decreasing set A.
To convince myself, I wrote out the following sequence.