
What the parent poster described isn’t what makes Kafka’s “exactly once” semantics work. It’s the use of an idempotency token attached to each publication, which turns “at-least-once” delivery into effectively “exactly once” via deduplication.
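Kafka’s idempotent producer implements this broker-side with a producer ID plus per-partition sequence numbers; the same idea in miniature, as a toy Python sketch (all names here are hypothetical, not Kafka’s API):

```python
import uuid

class DedupingConsumer:
    """Applies each message at most once, keyed by its idempotency token."""

    def __init__(self):
        self.seen = set()    # in a real system this would be bounded and persistent
        self.applied = []

    def handle(self, token, payload):
        # At-least-once delivery may hand us the same message twice;
        # the token lets us drop the duplicate instead of re-applying it.
        if token in self.seen:
            return False
        self.seen.add(token)
        self.applied.append(payload)
        return True

# The producer attaches one token per logical publication, then may retry freely:
token = str(uuid.uuid4())
consumer = DedupingConsumer()
consumer.handle(token, "debit $10")   # first delivery: applied
consumer.handle(token, "debit $10")   # retried duplicate: dropped
assert consumer.applied == ["debit $10"]
```

The key point: the retry is still at-least-once on the wire; only the dedup step on the receiving side makes the *effect* exactly-once.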




We used Flink with S3-compatible backends (for the Flink state) to ensure exactly-once processing.

I didn't say Kafka magically solves these problems for you, but it was required for the scalability we needed.
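The mechanism behind that setup is checkpointing: Flink periodically snapshots operator state together with input positions to durable storage (here, S3-compatible), so replay after a failure resumes from a consistent point instead of double-counting. A minimal Python sketch of the idea (not Flink’s API; class and method names are made up for illustration):

```python
class CheckpointedCounter:
    """Toy model of checkpoint-based exactly-once processing: state and
    input offset are snapshotted together, so replaying after a crash
    restarts from the last consistent checkpoint."""

    def __init__(self):
        self.offset = 0           # position in the input log
        self.total = 0            # running state
        self._checkpoint = (0, 0)

    def process(self, log):
        # Resume from the current offset; each record is counted once
        # per committed checkpoint epoch.
        for i in range(self.offset, len(log)):
            self.total += log[i]
            self.offset = i + 1

    def snapshot(self):
        # Persist (offset, state) atomically -- the crucial invariant.
        self._checkpoint = (self.offset, self.total)

    def restore(self):
        # On failure, roll back to the last consistent snapshot.
        self.offset, self.total = self._checkpoint

log = [1, 2, 3, 4]
c = CheckpointedCounter()
c.process(log)   # crash happens before any snapshot of this work...
c.restore()      # ...so we roll back and replay
c.process(log)
assert c.total == sum(log)   # replay does not double-count
```

Because the offset and the state are committed as one unit, a replayed record either contributes to the state exactly once or not at all; committing them separately is what produces duplicates or losses.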



