
Anyone who has worked with Redis at scale knows exactly why Kafka's persistent log approach is far superior for any robust system that relies on the queue as its source of truth and needs to scale out (i.e. not your tiny CRUD app).

And Kafka is hard to deploy? Please... If you deploy a Redis cluster without knowing the pitfalls of distributed systems, you'll have the same problems as with Kafka. Also, soon enough you'll be able to deploy brokers over JBODs without ZooKeeper; what will be the argument then, I wonder...




Regardless of complexity, what I find interesting in deploying Redis for this use case is the following: if the consistency provided by Redis is sufficient for the use case, or even more so, when a relaxed level of consistency and durability is enough for the business requirements, then, Redis being an in-memory system, and because of the specific design of the Stream inside Redis (a radix tree of blobs), you get a very high number of operations per second per process, with memory usage that is very compact compared to the other data structures. This makes it possible to scale certain use cases using a small percentage of the resources needed by other systems.
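For the curious, this is roughly what it looks like from a client. A minimal sketch, assuming redis-py and a Redis build that includes the Stream commands (they shipped in Redis 5.0); the stream and field names are invented:

    import redis

    r = redis.Redis(host='localhost', port=6379)

    # XADD appends an entry; '*' lets the server assign a monotonic
    # ID (millisecond timestamp + sequence number).
    entry_id = r.xadd('events', {'user': '42', 'action': 'click'})

    # XRANGE scans entries in ID order; '-' and '+' are the minimum
    # and maximum possible IDs.
    for eid, fields in r.xrange('events', '-', '+', count=10):
        print(eid, fields)

    # MEMORY USAGE hints at how compact the radix-tree-of-blobs
    # encoding is versus storing the same entries as separate keys.
    print(r.memory_usage('events'))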

About the use cases where strong guarantees are needed: Disque (http://github.com/antirez/disque), for instance, provides strong delivery guarantees in the face of failures, being a pure AP system with synchronous replication of messages. For Redis 4.2 I'm turning Disque into a Redis module. To make this possible, Redis modules are getting a fully featured Cluster API. This means it will be possible, for instance, to write a Redis module that orchestrates N Redis masters, using Raft or any other consensus algorithm, as a single distributed system. This will also make it easy to model strong guarantees.
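For context, Disque reuses the Redis protocol, so a plain Redis client can drive it with raw commands. A rough sketch of those delivery guarantees in practice, assuming redis-py and a Disque node on its default port; the queue name and replication factor are invented:

    import redis

    # Disque speaks the Redis protocol, on port 7711 by default.
    d = redis.Redis(host='localhost', port=7711)

    # ADDJOB returns only after the job has been synchronously
    # replicated to 2 nodes (or the 1000 ms timeout expires); that
    # synchronous replication is where the guarantee comes from.
    job_id = d.execute_command(
        'ADDJOB', 'myqueue', 'do-the-thing', 1000, 'REPLICATE', 2)

    # A job remains eligible for redelivery until it is ACKed.
    for queue, jid, body in d.execute_command('GETJOB', 'FROM', 'myqueue'):
        print(queue, jid, body)  # stand-in for real processing
        d.execute_command('ACKJOB', jid)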


That's covered in the article:

"Pricing for a small Kafka cluster on Heroku costs $100 a month and climbs steeply from there. It’s temping to think you can do it more cheaply yourself, but after factoring in server and personnel costs along with the time it takes to build working expertise in the system, it’ll cost more."

If your project can afford Kafka, use Kafka. This article is about achieving the same pattern in any project that already has Redis.
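For reference, the pattern in question is just a consumer tailing the stream and checkpointing the last ID it processed, so it can resume after a restart. A rough sketch, assuming redis-py; the stream key, checkpoint key, and handler are all made up:

    import redis

    r = redis.Redis()

    # Resume from the saved checkpoint, or from the start of the
    # stream ('0-0') on first run.
    last_id = r.get('consumer:last_id') or b'0-0'

    while True:
        # Block up to 5 s for entries with IDs greater than last_id.
        resp = r.xread({'events': last_id}, count=100, block=5000)
        for stream, entries in resp or []:
            for eid, fields in entries:
                print(fields)  # stand-in for real processing
                last_id = eid
            # Persist progress so a crash/restart resumes here.
            r.set('consumer:last_id', last_id)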


I think Gepsens' argument was that you're not achieving the same pattern; rather, you're achieving a subset of the pattern in which several important considerations of dealing with a distributed system have been ignored and/or are unsupported. Once you add those features back in, you basically arrive at Kafka and all the complexity involved in standing it up, so Redis isn't really an apples-to-apples comparison. While it certainly might work for certain use cases (and indeed appears to), many of the use cases that Kafka is necessary for would be just as complicated to set up Redis to support.

I think this is largely a rephrasing of one of the lessons of NoSQL vs. relational DBs: for a subset of the problems relational DBs have been used to solve in the past, NoSQL stores are perfectly capable of replacing them; but not every problem a relational DB handles can be handled by a NoSQL store without investing significant time and effort layering more complex systems on top, at which point you've basically implemented a poorly optimized, ad-hoc relational DB. In this case, Redis with the new Stream type is capable of handling a subset of the problems that have previously required Kafka, but it isn't itself a replacement for Kafka in all situations, since several important features of Kafka aren't available in Redis.



