Hacker News new | past | comments | ask | show | jobs | submit | jtwebman's comments login

He tried to say at the beginning that you should only even think about this once the DB gets too big, but there are things you can do at that point as well.


There is no “too big” in databases and in particular size of data is not the criterion for deciding on using something like event sourcing. It’s a really niche paradigm that is only ever going to be useful in quite unusual circumstances. Most of the time people don’t need it, and most of the time when people do it, they find their immutable event source is very inconvenient for lots of the normal things you want to do with your data, so they end up doing something like CQRS[1] (ie having a database as well). This is one of those Martin Fowler[2] type things that looks good on a whiteboard but most people would be better off avoiding most of the time.

[1] https://en.wikipedia.org/wiki/Command_Query_Responsibility_S...

[2] This Martin Fowler, https://en.wikipedia.org/wiki/Martin_Fowler_(software_engine... not this Martin Fowler https://metro.co.uk/2023/11/06/eastenders-star-reveals-why-m...


The way I see it, either your business domain requires querying over a large amount of data, or it doesn't.

If an application allows someone to enter, let's say, an order number from anywhere in the world from the last 10 years and find the order, there is no magic - some server out there is going to have to scan a huge amount of data to find a match.

Tricks such as indexes, partitioned tables, etc can be employed, but those tricks have nothing to do with event-sourcing and are independent of it.
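To make the point concrete, here's a toy Python sketch (the order numbers and data sizes are invented for illustration) showing that an index is just a precomputed lookup structure over the same data, independent of how that data was produced or stored:

```python
# Toy illustration: finding an order with and without an index.
# Order numbers and sizes here are made up for the example.

orders = [{"order_no": f"ORD-{i:08d}", "total": i % 100} for i in range(100_000)]

# Without an index: a full scan over every row.
def find_by_scan(order_no):
    for row in orders:
        if row["order_no"] == order_no:
            return row
    return None

# With an index: precompute a hash map from order number to row.
index = {row["order_no"]: row for row in orders}

def find_by_index(order_no):
    return index.get(order_no)

# Same answer either way; the index only changes how fast you get it.
assert find_by_scan("ORD-00099999") == find_by_index("ORD-00099999")
```

The same trick works whether the rows came from INSERT statements or from replaying an event log, which is why indexing is orthogonal to event-sourcing.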


> Tricks such as indexes, partitioned tables, etc can be employed, but those tricks have nothing to do with event-sourcing and are independent of it.

You might want to use different tricks in different situations. Different situations means different services, and different tricks means different storage/query technologies.

So how do you get your data into three systems - and more crucially - keep them in sync? Webhooks? Triggers? Some bidirectional sync magic app that claims to beat CAP?

Just use event-sourcing (append-only, disallow modification) and the multiple systems will stay in sync as long as they know how to process one more message.
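As a rough Python sketch of that idea (all names invented for illustration): each downstream system is just a projection folded over the same append-only log, so "staying in sync" means "having processed the same events":

```python
# Minimal event-sourcing sketch: one append-only log, several projections.
# All names and event shapes here are invented for illustration.

log = []  # append-only; events are never modified or deleted

def append(event):
    log.append(event)

# Projection 1: current balance per account (e.g. backing an OLTP-style store).
def project_balances(events):
    balances = {}
    for e in events:
        if e["type"] == "deposit":
            balances[e["account"]] = balances.get(e["account"], 0) + e["amount"]
        elif e["type"] == "withdraw":
            balances[e["account"]] = balances.get(e["account"], 0) - e["amount"]
    return balances

# Projection 2: full history per account (e.g. backing a search system).
def project_history(events):
    history = {}
    for e in events:
        history.setdefault(e["account"], []).append(e)
    return history

append({"type": "deposit", "account": "a1", "amount": 100})
append({"type": "withdraw", "account": "a1", "amount": 30})

# Both "systems" derive their state from the same log, so they agree.
assert project_balances(log) == {"a1": 70}
assert len(project_history(log)["a1"]) == 2
```

Each projection can live in whatever storage technology suits it; catching a lagging system up is just feeding it the events it hasn't seen yet.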


Agreed with your points but this article seems to present event-sourcing as a replacement for your database(s) and even makes claims about saving storage space, thus at least hinting at not using databases anymore.


It's a replacement for your source-of-truth, not your database(s). Although you're right about the article not explicitly mentioning slurping the events back into a DB. I suspect the reason is that there are plenty of articles which explain how to event-source from greenfield, but this is the first one I've seen which focuses on existing brownfield relational data - see the title.

> makes claims about saving storage space

I don't think that was the right reading about saving storage space:

> We’ve been trying to optimise the storage size; we’ve made some sins of overriding and losing our precious business data

I think it's his strawman RDBMS developer who optimised for saving storage space, and lost business data as a result. The suggested approach is:

> We can optimise for information quality instead of its size.


It was all backend in 1998.


PHP running as lambda functions^W^WCGI. Those were the times.


Sorry, but DLQs make it easier to do those alerts where a human needs to look at something ASAP. Not sure they can be gotten rid of, but maybe you call them something else.


Inngest engineer here!

Agreed that alerting is important! We alert on job failures, plus we integrate with observability tools like Sentry.

For DLQs, you're right that they have value. We aren't killing DLQs but rather rethinking them with better ergonomics. Instead of having a dumping ground for unacked messages, we're developing a "replay" feature that lets you retry failed jobs over a period of time. Our planned replay feature will run failures in a separate queue, which can be cancelled at any time. The replay itself can be retried as well if there's still a problem.
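A rough Python sketch of that replay shape (not Inngest's actual implementation; all names are invented): failures land in their own store with a timestamp, replay drains a time window of them through a separate pass, and a replay that still has failures can simply be re-run:

```python
# Toy sketch of "replay instead of a classic DLQ".
# Names are invented; this is not any vendor's real implementation.

failed = []  # failed jobs, each recorded with the time it failed

def record_failure(job, failed_at):
    failed.append({"job": job, "failed_at": failed_at})

def replay(handler, start, end):
    """Re-run failures from a time window in a separate pass.

    Returns the entries that still failed, so the replay itself can be
    retried later (or abandoned, which acts like cancelling it)."""
    still_failing = []
    for entry in [f for f in failed if start <= f["failed_at"] <= end]:
        try:
            handler(entry["job"])
        except Exception:
            still_failing.append(entry)
    return still_failing

record_failure({"id": 1, "ok": False}, failed_at=10)
record_failure({"id": 2, "ok": True}, failed_at=11)

def handler(job):
    if not job["ok"]:
        raise RuntimeError("still broken")

leftover = replay(handler, start=0, end=20)
assert [e["job"]["id"] for e in leftover] == [1]
```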


This isn't cheating in my mind. They used the tools available to them, just like they will on the job. I would ask clarifying questions, and if they understand what they are saying, why does it matter whether they memorized the answer or memorized how to find the answer?


They own the IP; they can do what they want. It seems like every few years some creator freaks out because the owner didn't like what they did and took it down. It is within their rights to do so. Fair use can only be decided by a judge in the US. Until more band together and fight, nothing will change.


Fair use means exactly the opposite of what you're saying. Companies who own IP can't do whatever they want. If use of a copyrighted work is transformative, it falls under fair use. It's true that it's up to the courts to decide whether specific works are transformative, but that doesn't justify game companies abusing individual content creators who don't have the financial means to defend themselves.


> If use of a copyrighted work is transformative, it falls under fair use.

Not necessarily. Being transformative may weigh in favor of fair use, but it isn’t automatically fair use.


This is a project manager's job, not an engineering manager's job.


OP said “product manager” which is also wrong. Funny how these three very different roles constantly get mixed up by smart engineers.


Maybe because from the POV of engineers, they all look kind of the same and half of it doesn't make sense.


The bad versions of all three look exactly the same. And the bad versions of all three are actually worse than nobody at all. Yet upper management is completely convinced those people are essential, so they'd rather keep a bad professional there than get rid of them.

Anyway, the good versions of those three are completely different, and add a ton of value in very different places. I think they are rare enough that many people never meet one of them.


It would make things a lot easier if it was all one big AI. Let's refer to it as The Company.


Titles--bloody titles! The AI doesn't care what its title is.


Frameworks come and go. I'd rather give the credit to the creator, not the company. It solved their problem, and open sourcing it benefits them more than the community: now they don't need to train new Facebook developers on some internal framework, and thousands of engineers move their framework forward instead of only the ones they pay.


Yep, sell shovels, don't do the mining.


The no-VM thing, I think, is a downside. Sure, raw performance is nice, but not having a process eat all system CPU resources is the true beauty of Erlang and Elixir, allowing you to self-heal. Are you handling that with this library?


Can you give an example of what you're referring to? I don't know of anything limiting memory / CPU / etc. in Erlang, at least for any individual gen_server. We have the Factory processes, which can gracefully load-shed, but that doesn't stop you from having a memory leak.

At least Rust doesn't have a garbage collector, so when the actor is stopped and dropped, it'll clean up not only its state but also its message queues, flushing them so that all memory is released at the time of drop.


Docker and Kubernetes are here for that.


They’ll give you ways to limit the CPU use of the OS process, but not the individual actor “processes” (Erlang overloads the term), which are opaque to k8s/linux/docker.


ChatGPT can't write code, so I am a little confused?

