My vague understanding of pg_upgrade is that it just updates the metadata tables; any data changes that need to be made (which is very rare) are usually then done online and incrementally during updates... it should be a very fast operation.
That's correct, but it's not really incremental. It's all or nothing at the time you run it, though the downtime is relatively small because it only twiddles the catalogs (corner case: statistics need to be regathered, for which we have a gross hack in place to copy them as best we can from the previous version).
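For the curious, the "regather statistics" step after pg_upgrade is just a database-wide ANALYZE; something like this (the table name below is made up):

    -- Regather planner statistics after pg_upgrade; they don't carry over in full.
    ANALYZE;

    -- Or prioritize a hot table first (hypothetical name):
    ANALYZE VERBOSE orders;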
All in all, logical replication will have to happen for a number of reasons:
* More OS independence
* Architecture independence
* Cross-version
* Intentional schema drift
* Alternative indexing strategies on a replica
* Avoiding both query cancellation on the standby and the bloat that hot-standby feedback causes on the primary
What Postgres has now is a more physical type of replication, based on the recovery logs (WAL), which think in terms of pages and btree/heap inserts rather than in terms of tuples and so forth. The main advantage of this approach is that it is very exact (to the point of being inflexible) and quite fast, and can be made even faster.
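You can see how physical it is from the monitoring views: everything is expressed as WAL byte positions rather than rows. A quick sketch, using the 9.2-era function and column names:

    -- On the primary: connected standbys and how far they have replayed the WAL stream.
    SELECT application_name, state, sent_location, replay_location
    FROM pg_stat_replication;

    -- Current WAL write position on the primary.
    -- (9.2 name; this became pg_current_wal_lsn() in PostgreSQL 10.)
    SELECT pg_current_xlog_location();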
I also think logical replication will allow for a much better backup tool than pg_dump and better composition with other systems. Also, easier experimentation with distributed databases.
A lot of treats in this release! Amongst the big new features (cascading replication, index-only scans) I'm really happy to see a row_to_json() function (great for message queues), DROP INDEX CONCURRENTLY (more flexibility to play with indexes even on production servers), NOT VALID CHECK constraints allowing us to slowly roll out more integrity, and a few other things. Great work as always!
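To make those concrete, here's roughly what each looks like in 9.2 (the table, index, and constraint names are made up):

    -- row_to_json: turn a row into a JSON document, e.g. a message-queue payload.
    SELECT row_to_json(t)
    FROM (SELECT 42 AS id, 'hello' AS body, now() AS created_at) AS t;

    -- Drop an index without blocking concurrent reads and writes on the table.
    DROP INDEX CONCURRENTLY idx_orders_customer_id;

    -- Add a CHECK constraint without scanning existing rows, then validate it
    -- later once the old data has been cleaned up.
    ALTER TABLE orders
      ADD CONSTRAINT orders_total_positive CHECK (total >= 0) NOT VALID;

    ALTER TABLE orders VALIDATE CONSTRAINT orders_total_positive;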
For those wondering what SP-GiST is: it allows the creation of new kinds of specialized non-btree indexes. 9.2 comes with spgist suffix trees over text and spgist quad trees over points.
I like the explanation from the release notes: "SP-GiST is comparable to GiST in flexibility, but supports unbalanced partitioned search structures rather than balanced trees. For suitable problems, SP-GiST can be faster than GiST in both index build time and search time."
Basically, operations in SP-GiST are generally faster than in GiST, but because SP-GiST trees are not balanced, it does not fit all problems.
I would guess SP-GiST is really good at geodata like in PostGIS, since coordinates are relatively evenly distributed in space.
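A quick illustration with the point quad-tree operator class (table and index names are made up):

    -- Hypothetical table of geodata.
    CREATE TABLE places (
        name     text,
        location point
    );

    -- 9.2 ships SP-GiST operator classes for point and text columns.
    CREATE INDEX places_location_spgist ON places USING spgist (location);
    CREATE INDEX places_name_spgist     ON places USING spgist (name);

    -- The point index can then serve, e.g., bounding-box searches:
    SELECT name FROM places WHERE location <@ box '((0,0),(10,10))';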
The built-in json type also provides a stable, preassigned OID. Because extensions don't really handle dependencies yet, this is critical -- making an extension "foo" that depends on an extension "json" is still rather far away.
That means projects like plv8 can more easily handle type conversions and so on. I think this is the largest single gain, and it's rather considerable.
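For example, a driver or language bridge can key its conversions off pg_type without any per-database lookup dance, because the OID for json is the same in every 9.2 cluster:

    -- The json type's OID is fixed, unlike OIDs of types created by extensions.
    SELECT oid, typname FROM pg_type WHERE typname = 'json';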
http://wiki.postgresql.org/wiki/Whats_new_in_PostgreSQL_9.2