Can it rebalance partitions? And delete crap from topics?
Every time we have to call up our ops guys for this; it's like: deep breath, first utter some indescribable magic AWS IAM nonsense, somehow get an ephemeral shell on some random Bitnami Kafka image, coax its ./kafka-topics.sh scripts into working over the next few hours, and ultimately succeed, but with deep regrets.
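(Concretely, the dance in question is roughly the stock reassignment tooling; a minimal sketch, where the bootstrap server, broker ids, and file names are all made up:)
kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file topics.json --broker-list "1,2,3" --generate
kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file reassign.json --execute
kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file reassign.json --verify
Here topics.json lists the topics to move, and reassign.json is the proposed assignment printed by --generate.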
If you have to manually delete data from topics, you are doing Kafka wrong. The whole point of it is high-speed data throughput; something automated should be doing the deletion for you downstream.
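(That automated something is usually retention; a minimal sketch with the stock CLI, assuming a local broker, a hypothetical topic name, and one-day retention:)
kafka-configs.sh --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name my-topic --add-config retention.ms=86400000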
A message queue has one (1) job: to keep track of the already-processed message pointer.
Which Kafka doesn't do. So you either store everything forever (lol) or you write some sort of broken, half-baked message queue on top of Kafka. (Broken and half-baked because you're not going to achieve fault tolerance or consistency without re-implementing the storage layer.)
Now, you're just gonna say that "Kafka isn't a message queue". Well, I don't need half of a solution that isn't even a message queue. Nobody needs that.
Considering that the core behaviours of message queues (like RabbitMQ, IBM MQ, etc.) and streaming log systems (like Kafka) are wildly different, I'd rather claim that the very idea of using Kafka as a message queue is the core problem.
You can use a distributed log to approximate a dedicated message router (which is honestly what "message queue" systems actually are; the queue is an artifact of limited capacity, not required behaviour), but such uses are going to be wrong 9 times out of 10.
OTOH, if you want multiple readers to observe the same event stream, including across the time dimension, rather than just receive messages, then message queue systems are the wrong solution and systems like Kafka are good options.
Both have their pros and cons, both have their uses, and both are a shit solution when you need the other.
The more important question is: which of those, if any, do you actually need?
Hi! kaskade is more like AKHQ than Cruise Control.
We use Strimzi by default, so we deploy Cruise Control with Kafka, and it takes care of rebalancing the data across the nodes. You can also deploy it without Strimzi.
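With Strimzi, kicking off a rebalance is just applying a KafkaRebalance resource; a minimal sketch, where the cluster and resource names are assumptions:
kubectl apply -f - <<EOF
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec: {}
EOF
Cruise Control then computes a proposal, which you approve by annotating the resource with strimzi.io/rebalance=approve.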
Deleting crap is more complicated, usually done with kafka-delete-records (this is kind of new, I think). The problem is the offsets. As a general rule, you should not delete data from topics.
There is also the deleteRecords API specifically for this. It's easier than the retention shrink -> increase dance, since it's a single API call and retention does not kick in immediately: the log segment must roll, either due to size or time, for retention to apply.
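A minimal sketch of the single-call route with the stock CLI (topic, partition, and offset here are made up; everything before the given offset gets deleted):
cat > delete.json <<EOF
{"version": 1, "partitions": [{"topic": "my-topic", "partition": 0, "offset": 42}]}
EOF
kafka-delete-records.sh --bootstrap-server localhost:9092 --offset-json-file delete.json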
In Textual, I wanted to write a logging app with a big data table and filtering, and I'd hoped it would be a bit more straightforward to write an optimized one.
I guess it would be a nice addition to have some kind of FilterableDataTable with history, filtering, caching, and fast rendering.
I guess you probably developed something like that for this tool; perhaps you could share it in Textual, or in some kind of "textual widgets extension lib"?
Hi! I did not implement something like that, but I can say that this year Textual has a great set of features and a nice community willing to help and share. I totally recommend Textual.
Does it support Protobuf deserialization via Schema Registry? This is basically where every other tool falls apart. Kafka UI just added support very recently, but kcat falls apart here.
Hi! It validates whether or not the message was generated for schema registry: it checks if the message has the schema registry magic bytes (these bytes carry the schema id). So it deserializes messages with or without schema registry.
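You can eyeball those bytes yourself: wire-format payloads start with the magic byte 0x00 followed by a 4-byte big-endian schema id. A quick check with kcat (broker and topic are made up):
kcat -b localhost:9092 -t my-topic -c 1 -e -f '%s' | xxd -l 5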
This is the drawback: you have to generate the descriptor first.
1) download the schema from the schema registry:
http :8081/schemas/ids/<my-id>/schema
2) generate the descriptor with the schema:
protoc --include_imports --descriptor_set_out=my-descriptor.desc --proto_path=. my-schema.proto
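For example, with httpie you can save the schema straight to a file so step 2 can pick it up directly (id 42 is made up):
http :8081/schemas/ids/42/schema > my-schema.proto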
Perfect! I had tried this last night before your comment (I saw these instructions in your README), but I was trying to include `-s url=` and `-s basic.auth...`, which appears to force you into using `-k avro` or `-v avro`. But once I removed the Schema Registry config options, I finally got the TUI showing up and was able to read messages! Thank you very much. Nearly all of my topics are Pb-encoded, so I'll have to throw something together to get all the descriptors and perhaps abstract a bit. I love seeing the different uses of Textual coming out, and this is no different; this is a big improvement over kcat IMO.
Hey ddoolin, it's so nice to read this. Yes! kaskade ignores the schema registry metadata for json and protobuf messages generated for schema registry, so you do not need to provide the url (only in the case of avro).
The only catch (in the protobuf case) is generating the descriptors with protoc, but the good thing is that generally we all have them, so maybe it's not a big problem.
Genuinely surprised that Kafka is something unknown; I thought it had gained ubiquitous status similar to k8s, but maybe that's me walking into a Baader–Meinhof effect.
All other things being equal, I'd prefer to use a tool written in a language with which I'm already comfortable. It's nice to find a tool that meets my requirements _and_ whose source code I can easily read.