Hacker News

Ah yes, that age-old showdown: the fundamentals of distributed systems vs. a shiny marketing page.

It doesn’t make what they say any more true. SQS is at-least-once delivery, and the fact you think otherwise is down to great marketing and maybe a bit of the Dunning–Kruger effect.




And I posted above how you can ensure “at most” once - route failures to a dead-letter queue by setting the redrive policy’s maxReceiveCount to 1...
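The redrive idea can be sketched without real SQS. Below is a plain-Python toy model (not the AWS API; the class and handler names are invented) in which a message that fails processing once goes straight to a dead-letter queue instead of being redelivered, i.e. the maxReceiveCount = 1 behavior:

```python
from collections import deque

class AtMostOnceQueue:
    """Toy queue approximating at-most-once delivery: each message is
    handed to a consumer at most one time; on failure it goes to a DLQ
    instead of being redelivered (the maxReceiveCount = 1 idea)."""

    def __init__(self):
        self.main = deque()
        self.dlq = deque()

    def send(self, body):
        self.main.append(body)

    def consume(self, handler):
        """Poll every message once; on failure, route it to the DLQ."""
        while self.main:
            msg = self.main.popleft()  # removed before processing: no retry
            try:
                handler(msg)
            except Exception:
                self.dlq.append(msg)

q = AtMostOnceQueue()
for body in ["a", "b", "c"]:
    q.send(body)

processed = []
def handler(msg):
    if msg == "b":
        raise RuntimeError("simulated consumer crash")
    processed.append(msg)

q.consume(handler)
print(processed)    # ['a', 'c']
print(list(q.dlq))  # ['b'] - never redelivered
```

Note the trade-off this makes explicit: "b" is never seen twice, but it is also never successfully processed. That loss is exactly what at-most-once means.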

Again, understanding how things work takes more than just reading a single web page.


> exactly once delivery

> done

> SQS is at least once delivery

> And I posted above how you can ensure “at most” once

So, uh, what happened to exactly once?


SQS is only meant for multiple publishers and a single subscriber, or a set of subscribers that all do the same thing. A message in an SQS queue is either delivered or in the queue. SQS operates on a strictly polling model.

Meaning you’re not going to put a message directly in an SQS queue and have it processed by multiple types of subscribers.

Are you expecting some generic system to make sure your subscribers are up and running and polling?

Better question is, do you know what SQS is and have you ever done anything with it?

This is one definition of “exactly once”.

https://www.confluent.io/blog/exactly-once-semantics-are-pos...

Exactly once semantics: even if a producer retries sending a message, it leads to the message being delivered exactly once to the end consumer.

This is done with AWS FIFO queues. From the docs:

Unlike standard queues, FIFO queues don't introduce duplicate messages. FIFO queues help you avoid sending duplicates to a queue. If you retry the SendMessage action within the 5-minute deduplication interval, Amazon SQS doesn't introduce any duplicates into the queue.

So exactly once is handled by the producer, the queueing service, and the subscriber together.
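The deduplication-interval behavior the docs describe can be sketched in plain Python (a toy model, not boto3; the 300-second window matches SQS FIFO's 5-minute interval, everything else here is a stand-in):

```python
import time

DEDUP_WINDOW = 300  # seconds - SQS FIFO's 5-minute deduplication interval

class FifoQueue:
    """Toy model of FIFO-queue deduplication: a SendMessage retry with
    the same deduplication ID inside the window is silently dropped."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.messages = []
        self.seen = {}  # dedup_id -> time the first send was accepted

    def send_message(self, body, dedup_id):
        now = self.clock()
        first = self.seen.get(dedup_id)
        if first is not None and now - first < DEDUP_WINDOW:
            return  # duplicate within the interval: not enqueued
        self.seen[dedup_id] = now
        self.messages.append(body)

fake_now = [0.0]
q = FifoQueue(clock=lambda: fake_now[0])

q.send_message("charge card #42", dedup_id="order-42")
q.send_message("charge card #42", dedup_id="order-42")  # retry, deduped
print(len(q.messages))  # 1
```

So a producer that retries inside the window is safe: the broker sees the same deduplication ID and drops the second copy.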


> If you retry the SendMessage action within the 5-minute deduplication interval, Amazon SQS doesn't introduce any duplicates into the queue.

But what if you try to send and, at just about the same time, the network fails for 6 minutes? The producer then has the option to try to send again, which may produce a duplicate, or to assume the first send worked, which may mean the message is lost.

In a system that supported true "exactly once" guarantees, the producer wouldn't have to pick between a duplicate message or a lost message, regardless of the length of network outage.
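The 6-minute-outage scenario can be made concrete with a toy dedup model (plain Python, not the AWS API; names are invented): once the window has lapsed, a retry with the same deduplication ID is accepted as a brand-new message, so the producer's only choices really are a duplicate or a possible loss.

```python
DEDUP_WINDOW = 300  # SQS FIFO's 5-minute deduplication interval, in seconds

queue, seen = [], {}  # seen: dedup_id -> time the first send was accepted

def send_message(body, dedup_id, now):
    """Toy FIFO SendMessage: drop retries inside the dedup window."""
    first = seen.get(dedup_id)
    if first is not None and now - first < DEDUP_WINDOW:
        return  # deduped
    seen[dedup_id] = now
    queue.append(body)

send_message("order-42", dedup_id="d-42", now=0)
# Network outage: the producer never saw an ACK. Six minutes later it
# retries, but the 5-minute window has expired, so a duplicate lands.
send_message("order-42", dedup_id="d-42", now=360)
print(queue)  # ['order-42', 'order-42']
```

Had the producer chosen not to retry instead, and had the first send actually failed, the message would simply be gone. Neither branch is exactly once.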


If you try to send and the request isn’t successful, the server gets a connection close and the client doesn’t get a 200 OK.

The server would know that the message wasn’t complete.


I think that's the crux of your misunderstanding here. There are networking issues where one side thinks a request has been sent or received successfully while the other side does not.

In this specific case, if a networking issue prevents the final ACK from reaching the server, the client assumes the connection closed cleanly and the message was delivered, while the broker waits for the connection to time out, which I assume it treats as a failure.

There are other cases where the client can receive a complete message but the server is unsure that it has, and will time out the connection while the client continues processing, assuming everything is fine.

You cannot build something on top of a networking stack that doesn’t guarantee exactly-once semantics at any level (TCP, IP, HTTP, physical) and expect to get exactly once out of it. You can mitigate it, sure, so maybe Amazon is super-duper sure those networking conditions will never happen.
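What you can build on top of at-least-once delivery is effectively-once processing, by making the consumer idempotent: deduplicate on a stable message ID before applying any side effect. A minimal sketch, with invented names and an in-memory set standing in for what would need to be a durable store:

```python
def make_idempotent(apply):
    """Wrap a side-effecting handler so redeliveries are harmless."""
    applied = set()  # in production: a durable store, not process memory

    def handle(msg_id, body):
        if msg_id in applied:
            return  # duplicate delivery: skip the side effect
        apply(body)
        applied.add(msg_id)  # ideally atomic with apply (same transaction)

    return handle

balance = {"amount": 0}

def apply_payment(amount):
    balance["amount"] += amount

handle = make_idempotent(apply_payment)

# At-least-once delivery: the broker redelivers ("m1", 100) after a lost ack.
for msg in [("m1", 100), ("m2", 50), ("m1", 100)]:
    handle(*msg)

print(balance["amount"])  # 150, not 250
```

This doesn't make delivery exactly once; it makes the duplicates that at-least-once delivery produces harmless, which is usually what people actually need.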



