
Redis 4.0 - fs111
https://groups.google.com/forum/#!msg/redis-db/5Kh3viziYGQ/58TKLwX0AAAJ
======
StevePerkins
_" 9) Active memory defragmentation. Redis is able to defragment the memory
while online..."_

I'm so amazed that this is a thing.

~~~
ihsw2
Memory fragmentation is largely related to the allocator you're using (i.e.
glibc malloc, jemalloc, tcmalloc), and previously it was up to the OS and
allocator to manage this (i.e. freeing up unused memory).

Now with active memory defragmentation things are a bit more pleasant,
specifically with high delete load actually freeing up unused memory in a
timely manner without impacting performance too much.

Previously, to fully recover unused memory, you would have to restart the
Redis server. Obviously that is not feasible in general, but when Redis is
using >50% more memory than it should on a 120GB machine, you have to consider
it as an occasional housekeeping option -- now, as mentioned, this ridiculous
task is no longer necessary.
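For reference, active defragmentation ships disabled and is switched on in
redis.conf; the knob names below are from the 4.0 redis.conf (values shown
are from memory, so check your own config before relying on them):

```
activedefrag yes
active-defrag-ignore-bytes 100mb       # don't bother below this much waste
active-defrag-threshold-lower 10       # start when fragmentation > 10%
active-defrag-threshold-upper 100      # go full-effort at 100%
```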

~~~
xxs
120GB on a single thread, yeah. Redis has been abused for quite some time, but
120GB is way out of reasonable reach for current CPU architectures if you only
use a single core (even if you switch off hyper-threading).

~~~
iagooar
Call me a slowpoke, but I wasn't aware that Redis was single core!

~~~
lozenge
It makes it great for storing distributed locks and semaphores, and thanks to
Lua scripting it can store/update cache invalidation lists -- e.g.
[https://github.com/Suor/django-cacheops](https://github.com/Suor/django-cacheops)
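The lock pattern alluded to here is usually `SET key token NX PX ttl` to
acquire, plus a small Lua script to release only if the stored token still
matches (so you never delete a lock someone else has since acquired). A rough
sketch of that compare-and-delete logic in plain Python, with a dict standing
in for Redis (the function names are illustrative, not a real client API):

```python
import uuid

store = {}  # stand-in for Redis: key -> lock token

def acquire(key):
    """SET key token NX: succeeds only if the key is absent."""
    if key in store:
        return None
    token = str(uuid.uuid4())
    store[key] = token
    return token

def release(key, token):
    """The Lua-script part: delete only if we still hold the lock."""
    if store.get(key) == token:
        del store[key]
        return True
    return False
```

The whole point of the Lua script is that the get-compare-delete happens
atomically inside Redis; a plain GET followed by DEL from the client would race.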

------
Dowwie
@antirez: Congrats! Are you going to modularize disque now that v4 is ready?

~~~
antirez
Yes, one of the top items on the Redis 4.2 roadmap is to port Disque as a
Redis module, so the steps are: 1) Implement a Redis Cluster modules API (I
already have some draft design; basically there is low-level stuff to enlist
nodes, broadcast messages, send one-to-one messages, and have callbacks called
when we receive a message). 2) Port Disque as a Redis module on top of such an
API, so that we can both validate that the Cluster API is good enough with a
real project, and make Disque a first-class citizen of the Redis ecosystem
without all the code duplication that, in the end, made the project extremely
hard to sustain. Thanks.

~~~
ericfrederich
I look forward to Aphyr scrutinizing it. No disrespect to you, his posts are
always fun to read and show how hard it is to get this stuff correct.

------
dvirsky
If you want to try out some of the modules already available:
[https://redis.io/modules](https://redis.io/modules)

------
frou_dh
Maybe with the new modules support there will emerge some explicit way to do
robust worker/job queues? So you don't have to remember your BRPOPLPUSH/LREM
dances (or whatever it is) just so.
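For reference, the "dance" is the reliable-queue pattern: BRPOPLPUSH atomically
moves a job from the pending list onto a per-worker processing list, and LREM
removes it once the work is done (a reaper can push it back if the worker
crashes). A sketch of those list semantics with plain Python lists -- no Redis
involved, just the shape of the algorithm:

```python
pending = []     # the queue: LPUSH at the head (index 0), pop from the tail
processing = []  # per-worker in-flight list

def enqueue(job):
    pending.insert(0, job)      # LPUSH queue job

def fetch():
    """BRPOPLPUSH queue processing: pop the oldest job and park it."""
    if not pending:
        return None             # (the real command would block instead)
    job = pending.pop()         # RPOP side: oldest element
    processing.insert(0, job)   # LPUSH onto processing
    return job

def ack(job):
    processing.remove(job)      # LREM processing 1 job
```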

~~~
zedpm
Antirez has talked about moving Disque to a Redis module, which should solve
that problem neatly. I'd love to drop our RabbitMQ dependency and move
queueing into Redis.

~~~
juliensab
Why?

~~~
zedpm
One less service to manage, plus RabbitMQ's options for dealing with network
partitions are all terrible. If you want to run a RabbitMQ cluster, you don't
have any option that avoids data loss.

~~~
Others
Can you even avoid data loss with partitions in an AP system like RabbitMQ?

~~~
zedpm
RabbitMQ is not strictly an AP system. Its clustering functionality supports
several different modes for handling partitions (all of which allow for data
loss) and is CP. Federation, which you should use if your network between
nodes isn't super reliable, is AP.[0]

What I'd like to see is clustering with support for merging data between
nodes. You'd probably end up with duplicates, which I'm OK with, but you'd
avoid the data loss that happens with other modes like Pause Minority, where
messages get dropped on rejoining a cluster.

[0]:
[https://www.rabbitmq.com/distributed.html](https://www.rabbitmq.com/distributed.html)

------
brango
Redis Cluster connecting to nodes via DNS instead of IP would vastly simplify
deployment on K8s.

~~~
iagooar
How do you solve this problem? Can you use headless K8S services for that?

~~~
brango
You can use a StatefulSet which gives a fixed hostname per pod. Then you can
have a sidecar pod that monitors the cluster, discovers the IPs of all pods,
and uses redis_trib to create and maintain the cluster. It's a hassle to make
it robust though.
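For what it's worth, the headless-service half of that setup is just a Service
with `clusterIP: None` selecting the StatefulSet's pods, which gives each pod a
stable DNS name like `redis-0.redis.<namespace>.svc.cluster.local`. A minimal
sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis        # pods resolve as redis-0.redis, redis-1.redis, ...
spec:
  clusterIP: None    # headless: DNS returns pod IPs directly
  selector:
    app: redis
  ports:
  - port: 6379
```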

~~~
iagooar
I don't see why you would do that instead of a headless service.

------
infocollector
Is there a Redis PPA for Ubuntu 16.04 that is supported by the redis team?

~~~
ing33k
Redis is of the the very few applications , that I install manually almost
every time.

trick is to use the utils/install_server.sh

~~~
dvirsky
hey, I wrote that little script years ago, glad to hear someone (besides me)
is still using it! :)

------
Hates_
Anyone know when we might see this on AWS (Elasticache)?

~~~
actuator
I don't like the ElastiCache offering of Redis at all. It is severely limited
if you are not just using Redis as a cache. There is no way to scale the size
of the node up/down without taking it down (this is possible if you run it on
EC2 yourself, since you have access to replication-related commands that are
not allowed on ElastiCache). We have had a master restart instead of failing
over to the slave, and in the process come back with no data. Even the
snapshot is only taken once a day.

------
indeyets
LFU policy sounds really interesting!

~~~
antirez
Thank you, there is more info in this blog post:
[http://antirez.com/news/109](http://antirez.com/news/109)
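For the curious: the post describes an 8-bit, logarithmic counter per key,
incremented probabilistically so that 255 can stand for very high access
frequencies. A sketch of that increment logic as described there (the constant
mirrors redis.conf's `lfu-log-factor`; treat this as an illustration, not the
exact Redis source):

```python
LFU_INIT_VAL = 5  # starting counter value for newly created keys

def lfu_incr(counter, r, lfu_log_factor=10):
    """Bump an 8-bit LFU counter with probability 1/(base*factor + 1).

    counter: current value (0..255); r: uniform random draw in [0, 1).
    The higher the counter already is, the rarer the increment, giving a
    roughly logarithmic mapping from hits to counter value.
    """
    if counter >= 255:
        return 255
    baseval = max(counter - LFU_INIT_VAL, 0)
    p = 1.0 / (baseval * lfu_log_factor + 1)
    return counter + 1 if r < p else counter
```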

~~~
leesalminen
Awesome write up. I can't wait to try out LFU with our usage. I have a
suspicion that our cache hit ratio will see a nice bump.

I've said this before and I'll say it again...thank you for all you do!

------
jokoon
It seems it even supports nearest-neighbor search for lon/lat points by
default... Quite nice, since most RDBMSs don't even support it by default.

Although I'm curious to know what algorithm it uses for nearest search; the
docs don't talk about it.

I don't really understand what Redis should not be used for; I guess it's not
for complex queries? Conventional RDBMSs really seem to belong to the hard
disk drive age. So the difficulty resides in having well-designed data
schemas.
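On the algorithm question above: the GEO commands store each point in an
ordinary sorted set, with the score being a 52-bit geohash (longitude and
latitude each quantized to 26 bits and bit-interleaved), so a radius query
becomes range scans over a few neighboring geohash boxes. A simplified sketch
of the interleaving idea (not Redis's exact code, and the real bit order may
differ):

```python
def geohash_score(lon, lat, bits=26):
    """Interleave quantized lon/lat bits into one sortable 52-bit integer."""
    # Quantize each coordinate to `bits` bits over its valid range.
    lon_q = int((lon + 180.0) / 360.0 * (1 << bits))
    lat_q = int((lat + 90.0) / 180.0 * (1 << bits))
    score = 0
    for i in range(bits):
        # even bit positions take longitude bits, odd take latitude bits
        score |= ((lon_q >> i) & 1) << (2 * i)
        score |= ((lat_q >> i) & 1) << (2 * i + 1)
    return score
```

The useful property is that points close together on the globe tend to share
high-order bits, so they land near each other in sorted-set score order.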

~~~
simonw
I adore redis, I use it for tons of stuff, but I still like to keep my single-
point-of-truth for critical data in an ACID, transactional relational
database.

I frequently denormalize that data INTO redis so I can answer certain classes
of queries quickly. I'm also very happy with redis for caching, rate limiting,
inter-process-communication, distributed locking and tons of other fun use-
cases.
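The rate-limiting use mentioned here is commonly the fixed-window INCR +
EXPIRE pattern: one counter key per (client, window), incremented on each
request and left to expire on its own. A sketch of that logic in pure Python,
with a dict standing in for Redis (the helper name is made up for
illustration):

```python
import time

counters = {}  # stand-in for Redis: key -> hit count

def allow(client, limit, window=60, now=None):
    """Fixed-window limiter: INCR rate:{client}:{bucket}, EXPIRE per window."""
    now = time.time() if now is None else now
    key = f"rate:{client}:{int(now // window)}"
    counters[key] = counters.get(key, 0) + 1  # INCR; EXPIRE handles cleanup
    return counters[key] <= limit
```

In real Redis you would INCR the key and set an EXPIRE of one window length
the first time it is created, so stale buckets vanish by themselves.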

------
anirudhgarg
Any news on when this will be on Azure Redis Cache ?

[https://azure.microsoft.com/en-us/services/cache/](https://azure.microsoft.com/en-us/services/cache/)

~~~
tracker1
Wouldn't expect it in anything less than 3-6 months tbh... depends on how
anxious MS is to get this into place.

------
HankB99
Interesting coincidence (or maybe not...) Redis was discussed on the Floss
podcast that I listened to earlier today and now I have an inkling of what
Redis is. My first exposure to Redis was to ponder where it came from after I
tried to run Gitlab on my puny (J1900, 4GB RAM and spinning rust) file server.
It was spectacularly non-performant with most page loads timing out. I suppose
it was because Redis had insufficient RAM for operation. Redis may be scalable
toward large busy systems but seems less so in the other direction. I thought
it would be cool to have a real Git server but this one was not it.

During the podcast the Redis guy mentioned that 4.0 was on the verge of being
released.

~~~
notamy
The issues you had may have come from GitLab not having enough resources
available to handle everything, as opposed to Redis itself; anecdotally, I
have an application with a couple thousand keys in it that only uses ~5MiB of
RAM.

~~~
pfooti
Yeah, I'm in the same boat - I have a redis I use for caching auth tokens and
some other values for a small service, and it doesn't take much ram at all.
(5-10 MiB)

------
fapjacks
I've said it before and I'll say it again: I am _so_ smitten with antirez (and
redis)! One of my favorite projects for sure.

