Hacker News

I’ve worked with on-prem and now AWS. Even being on AWS, I think the first part of the article is great: nice visuals and a clear description. When we were on prem, scaling was an issue during event storms, but by then we at least had enough context to know what the problem was.

Scaling the number of messages is no longer an issue for us on SQS, but size is; there are still constraints. When payloads get too big we pass references instead: “details created, find them here.” On prem we could dump around 25 MB into a message without issue, and could go to 50 MB, but it wasn’t safe.
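The reference-passing workaround described above is the classic claim-check pattern. A minimal sketch, assuming a hypothetical `store.put` that returns a lookup key (on AWS the store would typically be S3, e.g. via the SQS Extended Client Library):

```python
import json

# SQS's long-standing per-message limit; anything over it gets externalized.
SIZE_LIMIT = 256 * 1024  # 256 KB

class DictStore:
    """Stand-in for an external blob store such as S3 (assumption)."""
    def __init__(self):
        self.data, self.n = {}, 0

    def put(self, blob: bytes) -> str:
        self.n += 1
        key = f"obj-{self.n}"
        self.data[key] = blob
        return key

def make_message(payload: bytes, store: DictStore) -> str:
    """Inline small payloads; for big ones, send only a reference
    ("details created, find them here")."""
    if len(payload) <= SIZE_LIMIT:
        return json.dumps({"inline": payload.decode()})
    key = store.put(payload)
    return json.dumps({"ref": key})
```

The consumer resolves `ref` against the store; the queue only ever carries small, fixed-shape envelopes.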

I get your point about queues and pub/sub not being the same. Just to note, we do a lot of hybrid and expect others do too: publish to a topic and bridge to queues. Fan-out is easy and gives consumers a choice. I don’t know Google’s version, but with TIBCO EMS this was easy to manage and clear to everyone. If you wanted to listen to everything on prod.alerts.* you could, or if you wanted to process prod.alerts.devices.headend you could just queue it and process it all.
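A toy version of that subject matching, assuming EMS-style semantics where `*` matches exactly one dot-separated token and `>` matches one or more trailing tokens (so `prod.alerts.>` is the “listen to everything under prod.alerts” form):

```python
def matches(pattern: str, subject: str) -> bool:
    """Dot-separated subject matching in the spirit of TIBCO EMS wildcards:
    '*' matches exactly one token, '>' matches one or more trailing tokens."""
    p, s = pattern.split("."), subject.split(".")
    for i, tok in enumerate(p):
        if tok == ">":
            return len(s) > i          # at least one token must remain
        if i >= len(s):
            return False               # subject ran out of tokens
        if tok != "*" and tok != s[i]:
            return False               # literal token mismatch
    return len(p) == len(s)            # '*'-only patterns need equal length
```

A bridge from topic to queue is then just a subscription whose pattern decides which subjects land in which queue.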

We use queues as storage during long outages so that senders don’t have to change anything. Not a great use, but people were sure happy to know we could help by holding all the events while they dealt with their mess. We never got close to any limit doing this on prem.
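On SQS the knob that bounds this trick is the queue’s `MessageRetentionPeriod`: 4 days by default, raisable to 14 days at most. A small sketch of setting it; the boto3 call is shown commented out so the snippet stays self-contained, and the queue URL is a placeholder:

```python
# SQS retains messages for 4 days by default; MessageRetentionPeriod can be
# raised to 1209600 seconds (14 days), which caps how long a queue can absorb
# events while the downstream consumer is dealing with its mess.
FOURTEEN_DAYS = str(14 * 24 * 60 * 60)

attributes = {"MessageRetentionPeriod": FOURTEEN_DAYS}

# Applied with boto3 (queue URL is a placeholder, not a real endpoint):
# import boto3
# sqs = boto3.client("sqs")
# sqs.set_queue_attributes(QueueUrl="<your-queue-url>", Attributes=attributes)
```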

Never used it, but isn’t the idea of Kafka to hold all events like a database? Love the idea. Seems lazy and useful at the same time. Now that I write this I can see the danger too: you become a transactional system of record. Ugh. That’s someone else’s problem ;).




I do not know if SNS is actually based on Kafka, but Kafka has log retention, which is IMHO enabled by default (log.retention.hours=168, i.e. seven days, out of the box). So your events disappear after some time. So no, I would not agree that it holds events like a database. Database-ish :), especially with all the layers on top, like SQL.
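If you did want the database-ish behavior, retention is just per-topic config. A sketch using the stock Kafka CLI; the broker address and the topic name `events` are assumptions:

```shell
# Override the 7-day broker default for one topic.
# retention.ms=-1 means "retain forever" -- the log-as-database setup.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name events \
  --alter --add-config retention.ms=-1
```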

Now I wonder if Athena can actually run on SNS topics, hmmm.



