It wasn't an analogy, but it can be made as rigorous as you like.
Let S be the set of (addressable) storage locations, and let d(x, y) = |x - y|, the distance between storage locations (a non-negative integer). The triangle inequality d(x, z) <= d(x, y) + d(y, z) holds, d is clearly symmetric, and d(x, y) = 0 exactly when x = y; thus S and d form a metric space.
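To make that concrete, here's a minimal C sketch of d (my own illustration, not part of the argument; the helper name and the buffer are hypothetical, and the cast through intptr_t is a platform assumption, since ISO C only defines pointer subtraction within a single object):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* d(x, y) = |x - y| over raw addresses. */
    long long d(const void *x, const void *y) {
        return llabs((long long)((intptr_t)x - (intptr_t)y));
    }

    int main(void) {
        char buf[16];
        printf("d = %lld\n", d(&buf[3], &buf[11]));  /* prints d = 8 */
        return 0;
    }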
Suppose module A contains a pointer to storage location x.
Suppose module B contains a pointer to storage location y.
Suppose A and B each contain a function that reads from z and writes to x or y, respectively.
If, with respect to A, d(x, z) = n < infinity, then A can reach z by pointer arithmetic and write to it; and since y is likewise at finite distance from z, the triangle inequality bounds d(x, y), so A can write to y as well. The distance between A and B is bounded (effectively it's n).
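For instance (a hypothetical sketch; the module names and the storage layout are mine): when x, z, and y live in one address space at known finite offsets, A can step from its own pointer to z and interfere with B.

    #include <stdio.h>

    static int storage[3];     /* hypothetical layout: x, z, y adjacent */

    /* Module A holds only a pointer to x, but d(x, z) = 1 slot is
       finite, so pointer arithmetic reaches z. */
    void module_A(int *x) {
        *(x + 1) = 42;         /* A writes to z */
    }

    /* Module B reads z relative to its pointer to y; A's write is
       visible here, so the two modules can interfere. */
    int module_B(int *y) {
        return *(y - 1);       /* B reads z */
    }

    int main(void) {
        module_A(&storage[0]);                  /* x = &storage[0] */
        printf("%d\n", module_B(&storage[2]));  /* prints 42 */
        return 0;
    }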
On the other hand, if, with respect to A, d(x, z) = infinity (infinity here might mean that d threw an exception, or that z lives in an address space A cannot even name), then the distance between A and B is effectively infinite. (Strictly, allowing infinite values makes S and d an extended metric space, but nothing in the argument changes.)
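One way to see an "infinite" distance concretely (a POSIX-specific sketch of my own): after fork(), parent and child have separate address spaces, so no amount of pointer arithmetic in one can reach storage in the other.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int z = 0;
        if (fork() == 0) {   /* child plays module B */
            z = 42;          /* writes its own copy of z */
            return 0;
        }
        wait(NULL);          /* parent plays module A */
        /* The child's write never arrives: the same numeric address
           names different storage in each process, so d is effectively
           infinite and the modules can only run in parallel. */
        printf("%d\n", z);   /* prints 0 */
        return 0;
    }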
So the concurrency/parallelism dichotomy boils down to this:
Programs in which the distance between modules is finite are concurrent; programs in which it is infinite are parallel.
This example is stated for memory, but the same construction carries over to other relevant spaces as well.