

The Nitty Gritty of In-Memory Computing - nkurz
http://www.theplatform.net/2015/09/07/the-nitty-gritty-of-in-memory-computing/

======
dan31
Compressed representation often isn't a trade-off but a win-win. With
in-memory data structures like graphs or automata, searches can run several
times faster if you go for packed representations. What first comes to mind
are language-optimised collations [http://ow.ly/RVxwa](http://ow.ly/RVxwa),
lossless codes
[https://en.wikipedia.org/wiki/Entropy_encoding](https://en.wikipedia.org/wiki/Entropy_encoding),
[https://en.wikipedia.org/wiki/Delta_encoding](https://en.wikipedia.org/wiki/Delta_encoding),
and lightweight LZ
[https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Wel...](https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch).
You still have to decompress data chunks (pages, vertices and the like) to act
locally while traversing, but cache-line utilisation improves because each
load brings in more data to process at once, so you get better speed. And you
can go even faster if you can traverse the compressed representation directly
without unpacking, which is sometimes possible. General recipe: always shrink
data to go faster, and always verify the result in real scenarios.
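To make the delta-encoding idea concrete, here is a minimal sketch (my own
illustration, not from the comment) of a graph adjacency list stored as
varint-packed gaps between sorted neighbour ids. The function names are
hypothetical; the point is that a node's neighbours shrink to a few bytes
each, so more of the graph fits per cache line:

```python
def delta_varint_encode(sorted_ids):
    """Delta-encode a sorted list of node ids, then varint-pack the gaps.

    Each gap is written 7 bits at a time; the high bit of a byte marks
    that more bytes follow for the same gap.
    """
    out = bytearray()
    prev = 0
    for v in sorted_ids:
        gap = v - prev
        prev = v
        while gap >= 0x80:
            out.append((gap & 0x7F) | 0x80)  # low 7 bits, continuation flag set
            gap >>= 7
        out.append(gap)  # final byte of the gap, high bit clear
    return bytes(out)

def delta_varint_decode(data):
    """Inverse of delta_varint_encode: recover the sorted id list."""
    ids, cur, acc, shift = [], 0, 0, 0
    for b in data:
        acc |= (b & 0x7F) << shift
        if b & 0x80:
            shift += 7          # continuation: keep accumulating this gap
        else:
            cur += acc          # gap complete: add it to the running id
            ids.append(cur)
            acc, shift = 0, 0
    return ids

neighbours = [3, 17, 200, 1_000_000]
packed = delta_varint_encode(neighbours)
# 7 bytes here instead of 16 for four 32-bit ints
assert delta_varint_decode(packed) == neighbours
```

This is also a representation you can traverse without full unpacking: a
scan for one neighbour just walks the bytes and stops early, never
materialising the whole list.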

