No, dropped packets are partitions. They really are. A partitionable network is modelled as one which may fail to deliver any subset of sent messages between nodes. The Gilbert and Lynch paper makes this explicit.
The consistency guarantee requires that read/write histories are compatible with some sequentially consistent history on a single, non-concurrent read/write register. Defining a total order on operations is sufficient, I believe, but not necessary (does it matter what order two consecutive reads happened in?).
How do you explain Paxos, then? How does a dropped packet prevent the system from responding to queries? How about if I broadcast every response 10 times to everyone I know? How many packets must be dropped for the system to be considered unavailable?
Paxos is, fundamentally, a quorum-based system that deals with reordering of messages. It sacrifices liveness for safety - if the proposer does not hear back from a majority of acceptors (in the case of, e.g., a partition), the protocol will not complete, so availability is sacrificed.
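To make the quorum point concrete, here's a minimal sketch (my own illustration, not real Paxos - the names and structure are assumptions) of the majority check that gates phase 1. If a partition cuts the proposer off from a majority of acceptors, this check never passes and the protocol stalls rather than choosing an inconsistent value:

```python
# Hypothetical sketch of the quorum check in a Paxos-style proposer.
# 'responses' is the set of acceptors that answered the prepare message.

def can_proceed(responses, all_acceptors):
    """Phase 1 completes only if a strict majority of acceptors replied."""
    majority = len(all_acceptors) // 2 + 1
    return len(responses) >= majority

acceptors = {"a", "b", "c", "d", "e"}

# A partition that isolates three of five acceptors: the protocol blocks
# (liveness is lost), but safety is never violated.
print(can_proceed({"a", "b"}, acceptors))        # → False: blocked, stays safe
print(can_proceed({"a", "b", "c"}, acceptors))   # → True: majority reachable
```

That is the whole CAP trade in one predicate: the proposer prefers to stop answering (give up availability) rather than act without a quorum (give up consistency).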
My point is not that there is a 'vital packet' in every protocol, the omission of which will cause either a lack of availability or consistency (although I can certainly design protocols that way!) - it's that for every protocol there is a network partition which causes it to be either unavailable or inconsistent. That network partition might be dropping ten messages, or just one. Retransmitting would make sense, but in real life message failures are often highly temporally correlated :(
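The "correlated losses defeat retransmission" point can be shown with a toy model (entirely my own assumption - a fixed retry budget against a deterministic loss burst, which real networks only approximate):

```python
# Toy model: every attempt made during a loss burst is dropped. A sender
# gets one initial send plus a fixed number of retries; if the burst
# outlasts the retry budget, the message is effectively partitioned away.

def delivered(burst_length, retries):
    """True if any of the (1 + retries) attempts lands after the burst."""
    attempts = 1 + retries
    return attempts > burst_length

print(delivered(burst_length=2, retries=3))   # → True: retries outlast the burst
print(delivered(burst_length=10, retries=3))  # → False: correlated loss wins
```

Independent per-packet loss makes delivery exponentially likely with each retry; bursty, temporally correlated loss does not, which is why "just retransmit" is not a refutation of the partition argument.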
The proof of this, by the way, is in a very famous paper by Fischer, Lynch and Paterson called "Impossibility of Distributed Consensus with One Faulty Process". One takeaway is that a single slow-running process - indistinguishable, in an asynchronous system, from a crashed one - can prevent any deterministic consensus protocol from terminating. It may take a few missed messages, but only a single node...
Actually, according to your own blog post on the subject, Gilbert and Lynch define partition tolerance as:
“The network will be allowed to lose arbitrarily many messages sent from one node to another”
There's a huge world of difference between the network losing arbitrarily many messages, and the definition you use elsewhere in this thread: namely, any subset of packets dropped, no matter how small, counts as a network partition.
No, arbitrarily many doesn't automatically mean a huge amount :) This definition covers permanent partitions, but also encompasses temporary partitions which are effectively one dropped message or more. There exist protocols which will be broken by the loss of a single message. Paxos may not be one of them, but there is also a pattern of loss which will break that as well.
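Here's a deliberately naive protocol of the kind I mean (a made-up example of mine, not from Gilbert and Lynch): a primary applies a write locally, fires the update at a replica exactly once, and assumes delivery. Dropping that single message is enough to make the system inconsistent:

```python
# Hypothetical fire-and-forget replication: one lost message breaks it.

class Replica:
    def __init__(self):
        self.value = 0

    def recv(self, value):
        self.value = value

def write(primary, replica, value, network_delivers):
    primary.value = value
    if network_delivers:        # the network may drop this single message
        replica.recv(value)

primary, replica = Replica(), Replica()
write(primary, replica, 42, network_delivers=False)
print(primary.value, replica.value)   # → 42 0: reads now disagree
```

A one-message "partition" here yields divergent reads forever; patching it with acks and retries just moves the vital message elsewhere, until you end up needing a quorum protocol - which, per the earlier point, then gives up availability instead.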
The theory behind all this really does hold this point up. I have another blog post with much more detail on the theory here: http://the-paper-trail.org/blog/?p=49, but I warn you it may be heavy going.