The principal difference between 'at most once' and 'at least once' is whether a sender re-tries when it is unsure if the recipient has received the message.
If the recipient's ack never makes it back, then the sender cannot know whether the recipient actually received the message or not (the two-generals problem).
So this hypothetical middleman will receive a packet, check that it's not a duplicate, and forward it to the recipient it's proxying for. How will it know that the recipient has actually received it?
If a missing ack causes your abstraction to re-transmit the message, then it exhibits 'at least once' behavior.
If the abstraction only ever forwards the message along once and doesn't care whether the recipient acknowledged it, then it exhibits 'at most once' behavior.
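Here's a minimal sketch of that decision point over a lossy channel (UDP here); all function names, buffer sizes, and timeouts are illustrative, not from any real library:

```python
import socket

# The abstraction's only two options over a lossy channel: retry on a
# missing ack, or send once and move on.

def deliver_at_least_once(sock, message, addr, timeout=1.0, retries=5):
    """Re-send until an ack arrives: the recipient may see duplicates."""
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(message, addr)
        try:
            sock.recvfrom(1024)   # wait for the ack
            return True
        except socket.timeout:
            continue              # unsure whether it arrived -> retry
    return False                  # gave up; the recipient may still have it

def deliver_at_most_once(sock, message, addr):
    """Send once and never retry: the message may be lost."""
    sock.sendto(message, addr)    # no ack expected, no retry
```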
As a more concise answer - 'exactly once' delivery is impossible because you can't know if the recipient actually got the message.
If you assume a perfect communication channel, then I agree the problem is trivial, but I challenge you to find such a channel! Even on the same machine, interprocess communication can fail in all sorts of fun ways.
> So this hypothetical middleman will receive a packet, check that it's not a duplicate, and forward it to the recipient it's proxying for. How will it know that the recipient has actually received it?
It seems like the answer is in the first part, the "check that it's not a duplicate".
Implement at-least-once, but with a unique token to identify the request; the receiver sends back an acknowledgement with that token every time it receives the original message, but only hands it off for processing the first time. Stuff this behind a library/API so it's hidden from the user and the application code doesn't have to handle it, and... isn't that it?
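A minimal sketch of that scheme, assuming an at-least-once transport underneath (the `transport` interface and all names here are hypothetical):

```python
import uuid

# Sender side: tag each logical request with a unique token and retry
# until that token is acked. 'transport' is a hypothetical at-least-once
# channel with a send_and_wait_for_ack(token, payload) -> bool method.
def send_request(transport, payload):
    token = str(uuid.uuid4())
    while not transport.send_and_wait_for_ack(token, payload):
        pass  # no ack yet: retry (this is the at-least-once part)

# Receiver side: ack every copy we see, but hand the payload off for
# processing only the first time the token appears.
class DedupReceiver:
    def __init__(self, handler):
        self.seen = set()   # in practice: bounded and/or persisted
        self.handler = handler

    def on_message(self, token, payload, send_ack):
        send_ack(token)             # ack duplicates too, else the sender retries forever
        if token not in self.seen:
            self.seen.add(token)
            self.handler(payload)   # note: this handoff can itself fail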
Yes, but the handoff can fail in the same way (the library can't know whether the thing it's handing off to actually got the message). The application can, however, be resilient to that by making its operations idempotent and letting the handoff be at-least-once.
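For example, a sketch of an idempotent handler where replaying the same handoff is harmless (the payments schema is purely illustrative):

```python
import sqlite3

# Keying the side effect on the message token makes replay harmless:
# an at-least-once handoff still applies the payment exactly once.
# Assumed schema: payments(token PRIMARY KEY, account, amount).
def process_payment(db: sqlite3.Connection, token: str, account: str, amount: int):
    cur = db.execute(
        "INSERT OR IGNORE INTO payments (token, account, amount) VALUES (?, ?, ?)",
        (token, account, amount),
    )
    if cur.rowcount == 1:   # first time we've seen this token: apply the effect
        db.execute(
            "UPDATE accounts SET balance = balance + ? WHERE id = ?",
            (amount, account),
        )
    db.commit()             # a duplicate falls through and commits nothing new
```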
Everything in a digital system can fail, but by convention (and because it's not so far from the actual truth) some parts are assumed to be 100% reliable when modeling them. If you don't make this assumption, you can't guarantee anything.
But the recipient is not one atomic thing - we're assuming perfect communication between the process/driver/hardware that receives the packets and does the duplicate detection, and the process that wants to receive the message exactly once.
There's still communication happening there, and it can still fail. Buffers fill, processes pause for arbitrary delays which exceed timeouts, etc. Your assumptions based on your model are correct, but your model doesn't include that communication.
But all models have some level of detail they care about, and assuming the computer always works is a perfectly valid model for plenty of cases. It's just not all of them. You'll be able to create real-world cases where this abstraction is faced with a choice of whether to retry or not, and at that moment it will be unable to deliver exactly once.
> we're assuming perfect communication between the process/driver/hardware that receives the packets and does the duplicate detection, and the process that wants to receive the message exactly once
My claim is not that you can provide exactly-once delivery unconditionally. My claim is that if you can provide at-least-once delivery then you can turn that into exactly-once delivery. The word "delivery" is not rigorously defined, but IMO any reasonable definition necessarily entails a certain level of reliability at the receiving node.
I agree with your claim: a recipient can cope with at-least-once delivery by being idempotent. You're right.
The meaningful distinction is that something on the recipient's side needs to be idempotent, because the message might be received twice. The application can be oblivious to this, so long as you assume that internal handoff channel to be perfect.
People on the Internet won't like you calling it 'exactly once delivery' because it's not exactly once - it's an idempotent at-least-once. Which is great! But the at-least/at-most problem comes down to a decision about whether to re-try: there's no middle ground, I either have to retry or not. People won't like a claim that 'exactly once' delivery is possible, because it isn't; it's just moving the at-least-once-ness somewhere else.
> People on the Internet won't like you calling it 'exactly once delivery' because it's not exactly once
That depends entirely on what you mean by "it".
Messages can get lost. So if you want guaranteed delivery, you sometimes have to re-transmit the same message, and so you might end up with the same message being delivered more than once. But it is trivial to put an abstraction layer on top of that to discard the duplicates and make it appear to the application as if every message is received exactly once.