
I agree with this a lot. I posted a comment elsewhere in this thread about our use of events, and it boils down to selectively picking the entities we need to be able to reason about past states of, storing the new states of those as events, and then deriving views from that state. For many uses we never even need to explicitly apply state transformations to derive a materialized form of the present state; a suitable view is often sufficient. For some we do need to apply transformations into new tables, but we can do that selectively. We still have the database as a single source of truth, and since we're lucky enough not to need to scale writes beyond it, that simplifies things a lot.
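A minimal sketch of that pattern, using sqlite3 and hypothetical table/view names (not our actual schema): inbound data is stored as immutable event rows, and the "present state" is just a view selecting the latest event per entity, with no explicit materialization step.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE partner_data_events (
    event_id    INTEGER PRIMARY KEY,
    entity_id   TEXT NOT NULL,    -- immutable key for the entity
    payload     TEXT NOT NULL,    -- data as received, never rewritten
    received_at TEXT NOT NULL
);

-- Derived view of the present state: latest event per entity.
-- For many uses this is sufficient; no materialized table needed.
CREATE VIEW partner_data_current AS
SELECT entity_id, payload, received_at
FROM partner_data_events e
WHERE event_id = (
    SELECT MAX(event_id) FROM partner_data_events
    WHERE entity_id = e.entity_id
);
""")

conn.executemany(
    "INSERT INTO partner_data_events (entity_id, payload, received_at)"
    " VALUES (?, ?, ?)",
    [("acct-1", '{"rate": 0.05}', "2023-01-01"),
     ("acct-1", '{"rate": 0.07}', "2023-06-01"),
     ("acct-2", '{"rate": 0.03}', "2023-02-01")],
)

# Each entity resolves to its most recent payload.
print(dict(conn.execute(
    "SELECT entity_id, payload FROM partner_data_current"
).fetchall()))
```

Because the events themselves are never overwritten, the same view (or a variant filtered by `received_at`) can reproduce the state as of any earlier point in time.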

What it gives us is the ability to rerun and regression-test all reporting at any point in time for those data sources we model as events, and the ability to re-test all the code that transforms that inbound data, because we never throw it away.

"Our" form of event sourcing is very different from the "cool" form: We don't re-model most internal data changes as events. We only selectively apply it to certain critical changes. A user changing profile data is not critical to us. A partner giving us data we can't recreate without going back to them and telling them a bug messed up our data is. For data that is critical like that, being able to go back and re-create any transformations from the original canonical event is fantastic.

And as long as there is an immutable key for the entity, rather than just for the entity at time t(n), non-evented parts of the system can trivially reference either the entity at time t(n) or the entity at time t(now()), depending on need.
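The referencing pattern can be sketched like this (again with hypothetical names): a non-evented row either pins a specific event_id, giving "entity at time t(n)", or carries only the immutable entity_id, which floats to whatever the current state is.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE profile_events (
    event_id  INTEGER PRIMARY KEY,
    entity_id TEXT NOT NULL,   -- immutable across all versions
    name      TEXT NOT NULL
);

CREATE TABLE invoices (
    invoice_id      INTEGER PRIMARY KEY,
    entity_id       TEXT NOT NULL,  -- reference to "entity as of now"
    entity_event_id INTEGER         -- optional pin to "entity at time t(n)"
);
""")
conn.executemany(
    "INSERT INTO profile_events (entity_id, name) VALUES (?, ?)",
    [("cust-1", "Acme Ltd"), ("cust-1", "Acme Holdings Ltd")],
)
# Invoice 1 pins the name as it was when issued; invoice 2 floats to current.
conn.execute("INSERT INTO invoices VALUES (1, 'cust-1', 1)")
conn.execute("INSERT INTO invoices VALUES (2, 'cust-1', NULL)")

pinned = conn.execute("""
    SELECT p.name FROM invoices i
    JOIN profile_events p ON p.event_id = i.entity_event_id
    WHERE i.invoice_id = 1
""").fetchone()[0]

current = conn.execute("""
    SELECT p.name FROM invoices i
    JOIN profile_events p ON p.entity_id = i.entity_id
    WHERE i.invoice_id = 2
    ORDER BY p.event_id DESC LIMIT 1
""").fetchone()[0]

print(pinned, "|", current)   # Acme Ltd | Acme Holdings Ltd
```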



