The main confusion I see is that at-least-once delivery over a 99% reliable channel can erroneously be called exactly-once delivery, because it looks like exactly-once 99 times out of 100. That may be a reasonable approximation depending on the scale of your communication exercise, but if you have hundreds of thousands of receivers the assumption will surely start to fail.
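To make that concrete, here's a back-of-the-envelope sketch. It assumes each receiver's delivery succeeds independently with probability 0.99 (the independence is my simplification, and the receiver counts are illustrative), and computes the chance that no delivery needs a retry anywhere:

```python
# Probability that a 99%-reliable channel delivers cleanly to all N
# receivers, assuming independent per-receiver failures (a simplification).
p_clean = 0.99
for n in (1, 100, 100_000):
    print(f"{n:>7} receivers: P(no retry anywhere) = {p_clean ** n:.3g}")
# Output:
#       1 receivers: P(no retry anywhere) = 0.99
#     100 receivers: P(no retry anywhere) = 0.366
#  100000 receivers: P(no retry anywhere) = 0
# (0.99**100_000 is about 10^-437, which underflows float64 to 0.0)
```

At a hundred receivers you're already below even odds of a clean run; at a hundred thousand, retries, and therefore potential duplicates, are a certainty.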
The kind of setting where "100% reliable" is a reasonable assumption is inside the boundary of a single server: a CPU, a memory bus, and I/O peripherals including a NIC and non-volatile storage. Yes, things can and do fail inside that boundary, but once you start taking those failures into account, things get much more complicated. You are also no longer in the realm of distributed systems at that point, which is the context in which the controversy about exactly-once delivery arises.