Hacker News

I skimmed the article, but I imagined they were using it as a secondary data store. I think they want it to be durable in the sense that even after the events are consumed they can still play them back to reindex Elasticsearch (which is a thing you need to do periodically).
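The replay idea can be shown with a toy sketch: an append-only log retains events after consumption, so a consumer can rewind its offset to zero and rebuild a derived index from scratch. This is illustrative pseudocode of the semantics, not real Kafka client code (with kafka-python you would use `consumer.seek_to_beginning()`); the `Log` and `IndexConsumer` names are made up for the example.

```python
# Toy sketch of Kafka-style replay: the log keeps events regardless of
# whether they have been consumed, so a derived index can be rebuilt.

class Log:
    """Append-only event log (stand-in for a Kafka topic)."""
    def __init__(self):
        self.events = []  # retained even after consumers read them

    def append(self, event):
        self.events.append(event)


class IndexConsumer:
    """Builds a derived view (stand-in for an Elasticsearch index)."""
    def __init__(self, log):
        self.log = log
        self.offset = 0   # consumer owns its position in the log
        self.index = {}   # the derived, rebuildable view

    def poll(self):
        # Apply every event from the current offset to the log head.
        while self.offset < len(self.log.events):
            doc_id, body = self.log.events[self.offset]
            self.index[doc_id] = body
            self.offset += 1

    def reindex(self):
        self.index = {}   # throw away the derived view...
        self.offset = 0   # ...rewind to the start...
        self.poll()       # ...and replay the whole log


log = Log()
log.append(("1", "hello"))
log.append(("2", "world"))

c = IndexConsumer(log)
c.poll()                       # initial indexing
log.append(("1", "hello v2"))  # a later update arrives
c.poll()
c.reindex()                    # full rebuild gives the same state
print(c.index)                 # {'1': 'hello v2', '2': 'world'}
```

Because the log, not the index, is the source of truth, dropping and rebuilding the index is always safe; that is the property the parent comment is after.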



"With the log as the source of truth, there is no longer any need for a single database that all systems have to use. Instead, every system can create its own data store (database) – its own materialized view – representing only the data it needs, in the form that is the most useful for that system. This massively simplifies the role of databases in an architecture, and makes them more suited to the need of each application."


Fair enough. With such a setup, it seems like you still ought to be able to burn the Kafka+Elasticsearch world down, resubmit everything to Kafka, and rebuild Elasticsearch from it. I would certainly not sleep very well at night if I could not.


And then you end up with a different flavor of data store for every team, complete with its own idioms and (probably duplicative) business logic.

Unless their devops/SRE staff is up to the task, this "architecture" is a nightmare waiting to happen.


> I think they want it to be durable in the sense that even after the events are consumed they can still play them back to reindex Elasticsearch (which is a thing you need to do periodically).

That (replaying if needed) is exactly what Kafka allows you to do, unless I misunderstood what you wrote.


No, you understand. I'm just not sure what failure mode would make Kafka a bad store, especially if all its logs are created from other services.



