Maybe you are on the losing side of a split, and there is a set of nodes out there that can make quorum.
Or maybe 51% of the nodes crashed and what you see is all that is left of the cluster.
Whatever you decide, it has to work for both those possibilities.
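The symmetry between those two failure modes is exactly why majority quorum works: a minimal sketch (function and node names are invented for illustration) of the rule a node applies, which gives the safe answer whichever of the two situations it is actually in.

```python
def has_quorum(visible_nodes, cluster_size):
    """A node may only act (elect a leader, commit writes) if the set of
    peers it can still reach is a strict majority of the full cluster
    membership. The same check covers both possibilities: the minority
    side of a split and the survivors of a mass crash both fail it, so
    the node never has to know which one actually happened."""
    return len(visible_nodes) > cluster_size // 2

# 5-node cluster: seeing 3 nodes (including yourself) is a majority...
print(has_quorum({"a", "b", "c"}, 5))
# ...but the 2-node side of a split must stop accepting writes,
# because the other 3 nodes might still be up and forming quorum.
print(has_quorum({"d", "e"}, 5))
```

Note that the check is against the *configured* cluster size, not the number of nodes currently visible; otherwise both sides of a split would each think they had a majority.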
So are you interpreting Brewer as saying that, in practice, we never have a split? That we should just assume what we see is all there is of the network? Spanner, however, is a CP system. If you're willing to assume you'll never need to merge inconsistent data, wouldn't you go for AP?
What he's saying is that it's strictly CP, as it will handle partitions (there's a section further down the paper describing how they're handled). But since partitions hardly ever happen (because Google), it's pretty much CA in practice (always consistent and available).
So yes, for all intents and purposes, he's saying "Yeah, CAP does apply but we're so good we can make P 'go away'."
Note that in both CockroachDB and Spanner a cluster contains many independent, overlapping replica sets. The data is broken down into "ranges" (to use CockroachDB's terminology; Spanner calls them "splits"), each of which has its own replica set, typically of 3 or 5 members.
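To make that concrete, here's a minimal sketch of the idea (node names and range boundaries are invented; neither system is actually configured this way): a keyspace carved into contiguous ranges, each replicated on its own overlapping subset of nodes.

```python
# Hypothetical 5-node cluster. Each range covers a contiguous slice of
# the key space and has its own 3-member replica set; the sets overlap,
# so every node participates in several independent consensus groups.
RANGES = {
    ("a", "g"): ["n1", "n2", "n3"],
    ("g", "p"): ["n2", "n3", "n4"],
    ("p", "z"): ["n3", "n4", "n5"],
}

def replicas_for(key):
    """Return the replica set responsible for a given key."""
    for (lo, hi), replicas in RANGES.items():
        if lo <= key < hi:
            return replicas
    raise KeyError(key)

print(replicas_for("m"))
```

One consequence of this layout: a partition doesn't make the whole cluster unavailable at once. Each range stays available as long as a majority of *its* replica set is reachable, so the same network split can leave some ranges writable and others not.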