Moore's law doesn't apply to RPMs of spinning disks.
Agreed, if you include fast interconnects like SAS and exclude the network requirement of SANs.
Sharding/distribution across multiple nodes is another.
I disagree, for the same reason that doing so with iSCSI over Ethernet isn't: too much added latency.
Infiniband may help, but I have yet to try it empirically.
Switching/routing, multiple initiators, and distances longer than a few dozen meters.
This is only significant if one is limited to a trivial number of spinning disks. 20 years ago, with separate disk controllers, this was the case.
If you run some benchmarks, I expect you'll find that, for random I/O, N disks perform better than N times what one of those disks does on its own.
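To see why, here's a toy short-stroking model (purely illustrative; the disk parameters are assumptions, not measurements): when the same data set is spread over N disks, each disk covers only 1/N of it, so the average seek distance, and with it the per-request service time, shrinks.

    # Toy model of why spreading a data set over N disks can beat
    # N x one full disk for random 4k I/O: each disk seeks over only
    # 1/N of the data (short-stroking). All figures are assumptions.

    def avg_service_ms(seek_fraction, full_stroke_ms=16.0, rpm=7200):
        """Per-request service time: a seek over a fraction of the platter
        plus half a rotation of latency. Seek time is approximated as
        scaling with the square root of seek distance; the 4k transfer
        time is ignored as negligible."""
        seek_ms = full_stroke_ms * (seek_fraction ** 0.5)
        rotational_ms = (60_000 / rpm) / 2  # half a rotation, on average
        return seek_ms + rotational_ms

    def iops(service_ms):
        return 1000.0 / service_ms

    n = 10
    one_disk = iops(avg_service_ms(1 / 3))      # data fills one disk; avg seek ~1/3 of full stroke
    spread = iops(avg_service_ms((1 / 3) / n))  # each disk holds 1/n of the data

    print(f"1 disk, all data:           {one_disk:7.1f} IOPS")
    print(f"{n} disks, data spread out: {n * spread:7.1f} IOPS "
          f"(vs {n * one_disk:7.1f} for {n} x one full disk)")

With those made-up numbers the ten spread-out disks land well above ten times the single full disk, which is the short-stroking effect the benchmarks should show.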
SCSI provided (arguably) an order-of-magnitude increase in the number of disks per system.
Now, SAS provides another. $8k will buy 100 disks (and enclosures, expanders, etc.). How many IOPS is that?
ETA: The Fujitsu Eagle (my archetype of disk technology from 20-ish years ago) had, IIRC, an average access time of 28ms. If its sequential transfer rate was 1/60th to 1/100th that of modern disks, what fraction of a modern disk's 4k IOPS could it do?
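For whatever it's worth, here's the back-of-the-envelope arithmetic I'd do for both questions. The modern-disk figures (7200 RPM, ~8.5ms average seek) are assumptions; the Eagle's 28ms average access time is taken from above.

    # Rough IOPS arithmetic for the two questions above. The modern-disk
    # figures are assumptions for illustration; the Fujitsu Eagle's 28ms
    # average access time is quoted from the comment above.

    def random_4k_iops(access_ms):
        """Random 4k IOPS if each request costs one average access;
        the 4k transfer time itself is negligible and ignored."""
        return 1000 / access_ms

    # Modern 7200 RPM disk: ~8.5ms average seek + ~4.2ms rotational latency.
    modern_access_ms = 8.5 + (60_000 / 7200) / 2
    modern_iops = random_4k_iops(modern_access_ms)

    # 100 such disks behind SAS expanders, ignoring controller and queueing effects.
    print(f"One modern disk:  ~{modern_iops:.0f} IOPS")
    print(f"100 modern disks: ~{100 * modern_iops:.0f} IOPS")

    # Fujitsu Eagle: 28ms average access time (seek + rotation).
    eagle_iops = random_4k_iops(28)
    print(f"Fujitsu Eagle:    ~{eagle_iops:.0f} IOPS, "
          f"about {eagle_iops / modern_iops:.0%} of one modern disk")

On those assumptions the 100-disk SAS setup is in the ballpark of 8,000 random 4k IOPS, and the Eagle still manages roughly 45% of a single modern disk, a far smaller gap than the 60-100x in sequential rate, which is exactly the point about RPMs not tracking Moore's law.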
PL/SQL, though, with global data reach and advanced locking states for every single transaction, makes it really hard to move off of a single host. So it's more and more work to get more disks attached to that host, and CPU is a hard upper limit.