The point is to reuse data that was recently fetched into the (faster) cache memory as much as possible instead of incurring the penalty of a cache miss.
Cache performance really depends on memory access patterns.
Anyways :) I'm being pedantic here; I should probably get back to work.
Yes, random access (missing the cache every time) may be 100x slower than sequential access (hitting the cache almost always), but that's a constant factor: iterate through an array twice as large and the random version is still 100x slower, not worse.
I don't know about you, but I don't sort 100k records in a single batch, and if I do, it's because I messed up. But I might sort different batches of 100 records 1000 times a minute.