Too small to Kafka but too big to wait: Really simple streaming in Clojure (dataissexy.wordpress.com)
75 points by zonotope on Nov 24, 2018 | 27 comments



This post completely ignores fault tolerance. The reason Kafka requires you to have three machines isn't because it is somehow magical, but because that allows it to effectively recover from machine-level failure without data loss (depending on how your producers are configured, of course).

Saying this approach is even “a bit like Kafka” is incredibly weak - if he is trying to do semi-durable message queueing, fine, but instead he consistently attempts to pitch his solution as mostly on par with Kafka. In the end he has created Kafka minus the utility.


Exactly, and if you /really/ wanted to, you could run a one-node Kafka "cluster" with a replication factor of one instead of three for your topics. I totally get the fun of creating something from scratch, but you take on far more complexity and burden in your stack when you build something that needs file-system-level durability yourself.
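
Roughly, per the Kafka quickstart of that era (topic name here is just a placeholder):

    # start a lone ZooKeeper and a lone broker
    bin/zookeeper-server-start.sh config/zookeeper.properties
    bin/kafka-server-start.sh config/server.properties

    # create a topic that lives entirely on that single broker
    bin/kafka-topics.sh --create --zookeeper localhost:2181 \
      --topic events --partitions 1 --replication-factor 1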


It does ignore fault tolerance, yes, I agree, because right now I don't need it. When you're bootstrapping this stuff you want to be as lean as you can. The plan is to move to Kafka, which I do use a lot btw, when there's a case for that level of throughput. If I had my way I'd use Kafka now; my wallet, on the other hand, disagrees.

I know the reason for needing three machines for a Kafka cluster. And I'm certainly not pitching Durable Queue as a Kafka alternative unless, as in my case, the throughput will be so low in the initial stages as not to warrant a full Kafka cluster.

I could have run Kafka on a single node...

The "a bit like Kafka", weak yeah it is but the solution posted presented a fairly quick way to establish a queue and decouple messaging from the running application, that was the whole point.


Ever looked at Azure Event Hubs? We're using it as a Kafka alternative because it's actually way cheaper than setting up a full Kafka cluster and keeping it running (for xx million messages at a throughput of 4 MB per second it's €75). Or do you see disadvantages to that technology? I'm not trolling or anything, just curious.


No I haven't, but I appreciate the heads-up. It's been a few years since I even looked at anything on Azure, in all honesty.


As http://tech.trello.com/why-we-chose-kafka/ points out, Redis Streams would be another alternative. It still seems underdocumented, though, as it's a recent addition.

I created https://github.com/antirez/redis/issues/5582 asking what should be a basic, necessary, straightforward question; still no luck.


Hello vemv, we have time complexities documented for every stream command, and the reason there is no latency documentation is that it follows normal Redis latency, i.e. it works like blocking lists or Pub/Sub: as soon as something is available for waiting consumers, it is immediately sent to the socket of those consumers. So you can expect extremely low latencies, with no difference compared to other Redis commands. However, I'll try to reply to your question in more detail in that issue. Btw, what is bad is that you did not find your answers in the Streams intro doc. I'll update that too.
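
For example, in redis-cli (stream name and values here are just illustrative):

    # consumer: block until something newer than the last-seen ID arrives
    XREAD BLOCK 0 STREAMS mystream $

    # producer, from another connection: the blocked consumer above
    # receives this entry immediately
    XADD mystream * temperature 20.1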


While the documentation gets updated, you can think of it as similar to using a Redis list, with LPUSH and RPOP.
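
Something like this, sketching from the Redis command docs (key names made up):

    # list as a queue
    LPUSH jobs "job-1"
    RPOP jobs                # or BRPOP jobs 0 to block until an item arrives

    # rough stream equivalent; entries stay in the stream and can be re-read
    XADD jobs * payload "job-1"
    XRANGE jobs - +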


I like this essay, but I think it is very strange that he doesn't link to the talk Zach Tellman gave about the origins of Factual's Durable Queues:

https://www.youtube.com/watch?v=1bNOO3xxMc0

Factual's Durable Queues are a fairly minimal queueing effort, useful in cases where you need a lightweight way of handling back pressure because a downstream service has failed. In his talk, Tellman explains that he needed to write a lot of data to AWS S3, and sometimes S3 stops accepting writes, so he needed an easy retry mechanism.

I use Durable Queues everywhere in my code and find them very handy. They're useful when you need a very minimal queue that still has the backing of a file system.
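
For anyone curious, the core API is tiny; a sketch from memory of the README (the path, queue name, and upload function are placeholders):

    (require '[durable-queue :refer [queues put! take! complete! retry!]])

    ;; a queue manager backed by slab files under the given directory
    (def q (queues "/tmp/my-queues" {}))

    ;; producer: enqueue a task descriptor onto the :s3-uploads queue
    (put! q :s3-uploads {:bucket "my-bucket" :key "some/object"})

    ;; consumer: take a task, do the work, then mark it complete;
    ;; retry! puts it back on the queue if the downstream write fails
    (let [task (take! q :s3-uploads)]
      (try
        (upload-to-s3 @task)   ; hypothetical downstream call
        (complete! task)
        (catch Exception _
          (retry! task))))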


durable-queue was one of my favorite pieces of Clojure kit.


I like to use Akka in places where I can't or don't want to introduce operational complexity in the form of external messaging infrastructure.


Comparing to Kafka and Kinesis (which aren't even in the same category) feels like a straw man. The author needs durability, but one disk of one machine satisfies that need? I'm skeptical.


I was under the impression that the original/core Kinesis is Kafka-as-a-service (though big and tangential features have been added since).


Ok, yeah, their marketing speak fooled me. I thought they were more like Spark, BEAM, etc.

I still think the "Kafka or /tmp" framing of the article is a straw man, though. There are plenty of options in between, depending on the requirements.


Prohibitively expensive Kafka as a service.


<<but one disk of one machine satisfies that need>>

Right now it does, later on it won't. Then it moves to Kafka.


The good thing about people using overkill queue systems like Kafka or RabbitMQ for small problems is that they usually have time to react and correct the mistake after burning lots of engineering resources. The sooner they make the mistake, the better.


Is something like RabbitMQ really overkill, though? When I started out with a project that needed a durable queue and some inter-process communication, I bought it as a SaaS, and within 90 minutes the application was happily sending/consuming messages on a durable queue.

I mean, is PostgreSQL really overkill just because a CSV file can also do some things?
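
For reference, the happy path with the langohr client in Clojure is roughly this much code (broker URI and queue name are placeholders; I'm recalling the API from its getting-started guide, so treat it as a sketch):

    (require '[langohr.core      :as rmq]
             '[langohr.channel   :as lch]
             '[langohr.queue     :as lq]
             '[langohr.basic     :as lb]
             '[langohr.consumers :as lc])

    (let [conn (rmq/connect {:uri "amqp://user:pass@my-broker:5672/my-vhost"})
          ch   (lch/open conn)]
      ;; a durable queue survives broker restarts
      (lq/declare ch "work" {:durable true :auto-delete false})
      ;; consumer: handler receives the channel, delivery metadata, payload bytes
      (lc/subscribe ch "work"
                    (fn [_ch _meta ^bytes payload]
                      (println "got:" (String. payload "UTF-8")))
                    {:auto-ack true})
      ;; publish a persistent message to the default exchange, routed by queue name
      (lb/publish ch "" "work" "hello" {:persistent true}))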


I agree with this, RabbitMQ is a standard part of the Django stack (with Celery) and it's probably been the easiest component to manage in my experience. I have a RabbitMQ docker image that took 10 mins to stand up, and has been running in GKE for 2.5 years now without any downtime; it Just Works (TM). We don't drive it hard in terms of messages/second, it's just there delegating work from the web server to workers.

I strongly dispute the "overkill" narrative for this component, thinking about it like a SQL database is the right framing. It's really easy to set up for a simple app, and you can turn on features to scale when you need them. But you really don't pay much of a cost to use this component no matter how small your project is.
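
The stand-up really is close to a one-liner if you just want a broker plus the management UI (image from Docker Hub; ports are RabbitMQ's defaults):

    docker run -d --hostname my-rabbit --name my-rabbit \
      -p 5672:5672 -p 15672:15672 rabbitmq:3-management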


The problem with RabbitMQ is that you can't rewind. It's good for messaging, but if you need a log-like abstraction you need Kafka, which requires ZooKeeper to start with. I would appreciate a single-node message broker (very much like RabbitMQ) that doesn't do any fault tolerance and just focuses on providing a log abstraction.
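
By "rewind" I mean re-reading the log from the start, which in Kafka is just (topic name is a placeholder):

    # replay every retained message in the topic from the earliest offset
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
      --topic events --from-beginning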


I've seen the ability to rewind cause problems in practice (accidental rewinding causing redelivery of old messages). It's something to be used cautiously, only if you really need it.


It depends on your definition of "small problem".


I've been using redis streams for a similar problem.


If only AWS would allow Lambda functions to be triggered by SQS FIFO queues.

That would be the holy grail of serverless event solutions, one that scales from small to large.


Azure Functions can be triggered by Azure Storage queues.


Why is this downvoted?


A lot of people have said this doesn’t stand up to Kafka/RabbitMQ, but how does it compare to Amazon’s SQS?



