Unfortunately the author doesn't mention some pretty basic things - which PostgreSQL version, how much data, how much of it fits into RAM, what storage (and hardware in general) was used ...
If I understand it correctly, PostgreSQL was using the default configuration, which is rather inefficient - it's aimed more at "must be able to start anywhere" than at performance.
Decreasing random_page_cost makes sense if you have storage that handles random I/O well (although I wouldn't go all the way down to 1 even on an SSD). But who knows whether the data was read from storage at all? Maybe it would fit into RAM, in which case just increasing effective_cache_size would be enough for the planner to realize that.
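For example (just a sketch - the numbers here are made up and would have to reflect the actual machine and workload), both settings can be changed without a restart:

    -- illustrative values, not recommendations for any particular setup
    ALTER SYSTEM SET effective_cache_size = '12GB';  -- roughly the RAM available for caching (OS cache + shared_buffers)
    ALTER SYSTEM SET random_page_cost = 1.5;         -- assumes SSD-class storage
    SELECT pg_reload_conf();                         -- both settings take effect on reload, no restart needed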
Setting random_page_cost = 1 is pretty common advice for SSDs and has worked well in my experience. The typical advice for HDD RAID or SCSI setups is random_page_cost = 2, and SSDs are faster than those.
I don't know who recommends random_page_cost = 1, but IMNSHO it's a bit silly. Even SSDs handle sequential I/O better than random I/O, so values between 1.5 and 2.0 are more appropriate. I wouldn't really recommend 1.0 except when you know the data fits into RAM. There are other options that affect how the planner costs random I/O, e.g. effective_cache_size.
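If you want a rough sanity check of whether the data is actually being served from memory (this only covers PostgreSQL's own shared buffers, not the OS page cache), you can look at the hit ratio in pg_stat_database:

    -- fraction of block requests satisfied from shared buffers
    SELECT datname,
           blks_hit,
           blks_read,
           round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
    FROM pg_stat_database
    WHERE datname = current_database();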