PostgreSQL 9.2 RC1 released (postgresql.org)
80 points by pythonist on Aug 28, 2012 | hide | past | favorite | 20 comments



The news release doesn't tell me about or link to what's-new. That's here:

http://wiki.postgresql.org/wiki/Whats_new_in_PostgreSQL_9.2



I always love reading the wiki versions of the "What's New" because they actually show me how I can benefit and how to use the new feature.

If only I could switch $work DB from mysql to Pg...


"A dump/restore using pg_dump, or use of pg_upgrade, is required for those wishing to migrate data from any previous release." Heh.


Isn’t this standard procedure for any relational database?


These are the usual options, but hopefully one day there will be logical replication so it can be done incrementally and online.


My vague understanding of pg_upgrade is that it just updates the metadata tables; any data-format changes that are needed (which is very rare) are then done online and incrementally afterwards... it should be a very fast operation.


That's correct, but it's not really incremental. It's all or nothing at the time you run it, though the downtime is relatively small because it only twiddles the catalogs (corner case: statistics need to be regathered, for which we have a gross hack in place that copies them as best we can from the previous version).

All in all, logical replication will have to happen for a number of reasons:

* More OS independence

* Architecture independence

* Cross-version

* Intentional schema drift

* Alternative indexing strategies on a replica

* Avoiding both query cancellation and bloating hot-standby feedback

What Postgres has now is a more physical type of replication, based on the recovery logs, which deal in pages and btree/heap inserts rather than in tuples. The main advantage of this approach is that it is very exact (to the point of being inflexible) and quite fast, and can be made even faster.

I also think logical replication will allow for a much better backup tool than pg_dump and better composition with other systems. Also, easier experimentation with distributed databases.


A lot of treats in this release! Amongst the new big features (cascaded replication, index only scans) I'm really happy to see a row->json function (great for message queues), DROP INDEX CONCURRENTLY (more flexibility to play with indexes even on production servers), NOT VALID CHECK constraints allowing us to slowly roll out more integrity, and a few other things. Great work as always!
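For the curious, those three features look roughly like this (a sketch; the table, column, and constraint names are invented):

```sql
-- row_to_json turns a whole row into a JSON document, handy for
-- pushing change events onto a message queue.
SELECT row_to_json(t) FROM (SELECT id, payload FROM jobs) t;

-- Drop an index without taking an exclusive lock on the table.
DROP INDEX CONCURRENTLY jobs_payload_idx;

-- Add a CHECK constraint without scanning existing rows,
-- then validate it later once the data has been cleaned up.
ALTER TABLE jobs ADD CONSTRAINT payload_not_empty
  CHECK (payload <> '') NOT VALID;
ALTER TABLE jobs VALIDATE CONSTRAINT payload_not_empty;
```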


As a Rails user the best feature for me is the improvement of pg_stat_statements to normalize queries: https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_...

That means I get a real overview on what Rails is actually doing and what takes the longest.
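Something along these lines (assuming pg_stat_statements is in shared_preload_libraries; in 9.2 the query text is normalized, so all of Rails' parameterized variants collapse into one row):

```sql
CREATE EXTENSION pg_stat_statements;

-- Top queries by cumulative execution time.
SELECT query, calls, total_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```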


Possibly the biggest news in 9.2: index-only scans. That's going to be huge. Cascading replication is pretty awesome, too.
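Roughly, the feature works like this (table and index names invented): if a query can be answered from the index alone and the visibility map says the pages are all-visible, the heap is never touched.

```sql
CREATE INDEX users_email_idx ON users (email);
VACUUM users;  -- keeps the visibility map current

EXPLAIN SELECT email FROM users WHERE email = 'a@example.com';
-- The plan may show "Index Only Scan using users_email_idx on users"
```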


Try the feature matrix which includes 9.2: http://www.postgresql.org/about/featurematrix/


For those wondering what SP-GiST is: it allows the creation of new kinds of specialized non-btree indexes. 9.2 comes with spgist suffix trees over text and spgist quad trees over points.
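A quick sketch of both built-in opclasses (table and index names invented; point and text columns get the spgist default opclass automatically):

```sql
-- Quad tree over points; supports operators like <@ (contained in box).
CREATE INDEX places_loc_idx ON places USING spgist (location);
SELECT * FROM places WHERE location <@ box '((0,0),(10,10))';

-- Suffix tree over text.
CREATE INDEX people_name_idx ON people USING spgist (name);
```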


I like the explanation from the release notes: "SP-GiST is comparable to GiST in flexibility, but supports unbalanced partitioned search structures rather than balanced trees. For suitable problems, SP-GiST can be faster than GiST in both index build time and search time."

Basically, SP-GiST operations are generally faster than GiST's, but because SP-GiST structures are not balanced, it does not fit all problems.

I would guess SP-GiST is really good at geodata like in PostGIS, since coordinates are relatively evenly distributed in space.


I just wanted to make the connection to suffix trees: http://en.wikipedia.org/wiki/Suffix_tree

Because those can be better than btrees when it comes to text.



Somewhat, but it's pretty limited in that it only verifies that the contents are valid JSON. You can't query against it with the built-in tools.


In a way, it's symbolic: it's saying that postgres is treating JSON and the developers that use it as first-class citizens.

Regardless, it will quickly and steadily improve.


It also provides a stable, preassigned OID. Because extensions don't really handle dependencies yet, this is critical -- making the extension "foo" that depends on the extension "json" is still rather far away.

That means projects like plv8 can more easily handle type conversions and so on. I think this is the largest single gain, and a rather considerable one.


You can write simple perl/python plugins to create JSONPath index functions against JSON content.
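A sketch of that idea in PL/Python (the function name, table, and JSON layout are all made up; an expression-index function must be declared IMMUTABLE):

```sql
-- Requires: CREATE EXTENSION plpythonu;
CREATE FUNCTION json_get(doc text, key text) RETURNS text
LANGUAGE plpythonu IMMUTABLE AS $$
    import json
    return json.loads(doc).get(key)
$$;

-- Functional index over one key of the JSON document:
CREATE INDEX events_user_idx ON events (json_get(body::text, 'user'));
SELECT * FROM events WHERE json_get(body::text, 'user') = 'alice';
```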



