Thanks in advance!
CPU registers (8-32 registers) – immediate access (0-1 clock cycles)
L1 CPU caches (32 KiB to 128 KiB) – fast access (3 clock cycles)
L2 CPU caches (128 KiB to 12 MiB) – slightly slower access (10 clock cycles)
Main physical memory (RAM) (256 MiB to 4 GiB) – slow access (100 clock cycles)
Disk (file system) (1 GiB to 1 TiB) – very slow (10,000,000 clock cycles)
Remote Memory (such as other computers or the Internet) (Practically unlimited) – speed varies
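Taking the cycle counts above at face value, a quick conversion puts them in wall-clock terms. This assumes a hypothetical 3 GHz clock (i.e. 3 cycles per nanosecond); the class and method names are made up for illustration:

```java
// Convert cycle counts to approximate wall-clock time,
// assuming a (hypothetical) 3 GHz clock, i.e. 3 cycles per ns.
public class LatencyDemo {
    static double nanos(long cycles, double clockGhz) {
        return cycles / clockGhz; // cycles divided by cycles-per-ns
    }

    public static void main(String[] args) {
        long[] cycles = {1, 3, 10, 100, 10_000_000};
        String[] level = {"register", "L1 cache", "L2 cache", "RAM", "disk"};
        for (int i = 0; i < cycles.length; i++) {
            System.out.printf("%-8s ~%,.1f ns%n",
                    level[i], nanos(cycles[i], 3.0));
        }
    }
}
```

So a RAM hit costs tens of nanoseconds while a disk hit costs milliseconds; that five-orders-of-magnitude gap is what the hierarchy is about.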
Hmm, there is something very wrong here. I'll try and explain in a blog post.
It probably doesn't matter though..
And yes, Redis is very fast and you gain a lot when using it compared to just a Hash in the same process.
The bigger win, IMHO, is that you gain flexibility: since Redis (or whatever) is decoupled from your process, it can run on another processor, another machine, or perhaps across many machines, etc..
Not sure how in-process cache would work in node, being async and all, but yes in-process is faster. But then you have to think about stuff like:
- how do you avoid losing everything when node crashes / restarts?
- what if another process needs to read/write the cache?
- what if you need more memory than a single machine provides (probably not going to happen).
- implementation bugs
I get the feeling you are kind of anti-Redis, and I don't get why. Redis is a very cool project and could be useful for a lot of things.. It's not Redis' fault that some people misuse it..
So why would you compress something that you can only decompress if it's recently reused?
How would you do mailinator with your strings in Redis, when it takes O(n) calls to Redis to recover and decompress an email, where n is the number of lines (or runs of consecutive lines, granted) in the email?
Yeah, you actually do.. sorry.
"So why would you compress something that you can only decompress if it's recently reused?"
Not sure I understand your questions, and I've only just started looking at Redis. But I guess you could do it the same way, though the added latency may make it infeasible. The better answer is probably that you don't: you would modify the implementation to fit Redis' (or whatever's) strengths and weaknesses.
You really can process mailinator quantities of email with a simple Java server using a synchronized hash-map and linked-list LRU, and have some CPUs left over for CPU-intensive opportunistic LZMAing.
Trying to do it with an IPC/TCP ping-pong for each and every line, though? I'm not sure you could process mailinator quantities of email within any reasonable hardware budget...
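That "synchronized hash-map and linked list LRU" is essentially what the JDK gives you out of the box: a `LinkedHashMap` in access-order mode plus `Collections.synchronizedMap`. A minimal sketch (class names here are made up for illustration):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded LRU cache built on LinkedHashMap's access-order mode.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() refreshes recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least-recently-used entry
    }
}

public class LruDemo {
    public static void main(String[] args) {
        Map<String, String> cache =
                Collections.synchronizedMap(new LruCache<>(2));
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a", so "b" is now least recently used
        cache.put("c", "3"); // exceeds capacity: "b" is evicted
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

Every operation here is an in-process method call; the contrast with one network round-trip per line is the whole point of the comment above.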
Luckily you have a chance to see the error of your ways :)
But you have to remember that most people can't have important data in just one process; it's going to crash and your data is gone. The LMAX guys solved this in a cool way, but I wouldn't call it easy:
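The principle behind the LMAX approach is to keep the working state in memory and journal every input, so a crashed process can rebuild its state by replaying the log. A rough sketch of that journaling idea (illustration only, not the LMAX implementation; assumes keys and values contain no tabs or newlines):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.HashMap;
import java.util.Map;

// Sketch of journaling: append every write to a log file, and rebuild
// the in-memory map by replaying the log after a crash/restart.
class JournaledMap {
    private final Map<String, String> data = new HashMap<>();
    private final Path journal;

    JournaledMap(Path journal) {
        this.journal = journal;
        try {
            if (Files.exists(journal)) {
                for (String line : Files.readAllLines(journal)) { // replay
                    String[] kv = line.split("\t", 2);
                    if (kv.length == 2) data.put(kv[0], kv[1]);
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    void put(String key, String value) {
        try {
            // Journal first, then mutate: after a crash the log is the truth.
            Files.writeString(journal, key + "\t" + value + "\n",
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        data.put(key, value);
    }

    String get(String key) {
        return data.get(key);
    }
}

public class JournalDemo {
    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("journal", ".log");
        JournaledMap before = new JournaledMap(log);
        before.put("user:1", "alice");
        // Simulate a crash/restart: a fresh instance replays the journal.
        JournaledMap after = new JournaledMap(log);
        System.out.println(after.get("user:1")); // prints alice
    }
}
```

Getting this fast and correct under load (fsync policy, log compaction, concurrent writers) is exactly the part that's cool but not easy.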