
Postgresql 9.3 Released - ergo14
http://www.postgresql.org/about/news/1481/
======
masklinn
I'm pretty sure I've posted this in every pg 9.3 thread to date, but
application developers shouldn't miss all the new diagnostic fields on
PGresult[0]: you can now get (when applicable to the error) the relevant
schema, table, column, column type and constraint[1], so you can translate
error information back into your application domain and generate logging and
error messages which make sense to developers & end users.

It's already available in psycopg2 (since 2.5)[2] and ruby-pg (in 0.16)[3], it
may be available in other adapters (outside of libpq obviously).

[0] [http://www.postgresql.org/docs/9.3/static/libpq-exec.html#LI...](http://www.postgresql.org/docs/9.3/static/libpq-exec.html#LIBPQ-PQRESULTERRORFIELD)

[1] without having to perform string munging on localized error messages,
which probably wouldn't recover half of these fields anyway

[2]
[http://psycopg.lighthouseapp.com/projects/62710/tickets/149](http://psycopg.lighthouseapp.com/projects/62710/tickets/149)

[3] [https://bitbucket.org/ged/ruby-pg/issue/161/add-support-for-...](https://bitbucket.org/ged/ruby-pg/issue/161/add-support-for-pg_diag_-_name-error)
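As a sketch of how an application might consume these fields with psycopg2 >= 2.5: the `diag` attribute of a raised error exposes things like `constraint_name` and `table_name`. The constraint-to-message mapping and the stand-in diagnostics object below are made up for illustration, so the snippet is self-contained and doesn't need a live database.

```python
# Turning the 9.3 diagnostic fields into a domain-level error message.
# With psycopg2 >= 2.5 the `diag` attribute of a raised psycopg2.Error
# carries these fields; here a plain object stands in for it.

FRIENDLY_CONSTRAINTS = {
    # hypothetical mapping from constraint names to user-facing text
    "users_email_key": "That email address is already registered.",
}

def humanize_pg_error(diag):
    """Map error diagnostics to a message that makes sense to end users."""
    constraint = getattr(diag, "constraint_name", None)
    if constraint in FRIENDLY_CONSTRAINTS:
        return FRIENDLY_CONSTRAINTS[constraint]
    table = getattr(diag, "table_name", None)
    column = getattr(diag, "column_name", None)
    if table and column:
        return "Invalid value for %s.%s." % (table, column)
    return "A database error occurred."

class FakeDiag:  # stands in for psycopg2's Diagnostics object
    constraint_name = "users_email_key"
    table_name = "users"
    column_name = "email"

print(humanize_pg_error(FakeDiag()))  # → That email address is already registered.
```

In real code the lookup would happen in the `except psycopg2.Error as e:` handler, passing `e.diag`.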

~~~
dspillett
_> error messages which make sense to developers & end users._

Giving this sort of detail direct to users is generally a bad idea for two
reasons:

1. It can scare the living bejesus out of some of them.

2. You are handing out knowledge of your application's inner workings. If you
have an undiscovered injection route, a malicious entity might be able to use
your error messages to find the injection flaw and/or find things to do with
it once found.

This sort of detail should be logged of course, but give the users a simple
code to report to you when reporting the problem ("please quote issue XY0009
when contacting support about this issue") and record the detail against that
for your reference.

~~~
sspiff
Depends on who you're targeting (developers know how to make sense of types
etc) and how well you translate the error info to human-readable error
messages.

More information should allow you to produce better reporting. It's also not
necessary to present all the information to your users, you can also make
better error logs that users can send to the developers.

~~~
dspillett
_> Depends on who you're targeting (developers_

By all means have a developers mode which does hand out the information more
readily (though I tend to be wary of that in case it gets left on in
production, and because it is an extra code path that needs to be properly
tested), or report the detail directly if it is an internal tool. But the post
I was referring to implied (to my mind) handing the detail out to end users
(and by "end users" I mean the untrustworthy mob that is the general public).

 _> It's also not necessary to present all the information to your users, you
can also make better error logs that users can send to the developers._

Hence my last para, where I suggest giving the user a reference to the detail
without giving them the actual details (the code to give to you for looking up
the stored exception report). Giving all the info to the user as an encrypted
package would work too (in that case "please quote this code in any issue
reports" becomes "please copy+paste this block of text into any problem
report").

------
JoelJacobson
This one is my favorite:

"Prevent non-key-field row updates from blocking foreign key checks (Álvaro
Herrera, Noah Misch, Andres Freund, Alexander Shulgin, Marti Raudsepp,
Alexander Shulgin)

This change improves concurrency and reduces the probability of deadlocks when
updating tables involved in a foreign-key constraint. UPDATEs that do not
change any columns referenced in a foreign key now take the new NO KEY UPDATE
lock mode on the row, while foreign key checks use the new KEY SHARE lock
mode, which does not conflict with NO KEY UPDATE. So there is no blocking
unless a foreign-key column is changed."
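A rough sketch of the new behaviour (the tables here are made up): before 9.3, the UPDATE below took a full row lock that conflicted with the lock used by foreign-key checks.

```sql
-- Session 1: update a column that is not part of any key.
-- In 9.3 this takes the weaker FOR NO KEY UPDATE row lock:
BEGIN;
UPDATE orders SET note = 'rush' WHERE id = 1;

-- Session 2: the foreign-key check behind this INSERT locks the
-- referenced orders row with FOR KEY SHARE, which does not conflict
-- with NO KEY UPDATE, so it no longer blocks (it did in <= 9.2):
INSERT INTO order_lines (order_id, qty) VALUES (1, 3);
```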

~~~
slurry
ALEXANDER SHULGIN!!!!!?????

[http://en.m.wikipedia.org/wiki/Alexander_Shulgin](http://en.m.wikipedia.org/wiki/Alexander_Shulgin)

~~~
scott_weber
(FirstName, LastName) is probably not a very good UNIQUE KEY.

------
pilif
Another year, another awesome release.

Thank you so much, PostgreSQL team!

As always, there's one feature that makes me want to upgrade immediately. This
time it's the NO KEY UPDATE lock mode which will greatly improve the
performance of our main application during its lengthy importer runs.

Using pg_upgrade, switching major releases has become comparatively simple
over the last few years, though after last year's issues in 9.2.0 (some IN()
queries didn't return the correct results), I'm inclined to wait for 9.3.1
this time around.

~~~
craigkerstiens
Fully agreed, a very awesome release, and there's no shortage of features in
this version. Of note:

      * The improvements to foreign data wrappers to allow them to be writable, along with the Postgres FDW, will improve visibility for the functionality
      * Lateral joins
      * Materialized views
      * The checksums for checking against corruption

We've elaborated some of our favorites over at Heroku Postgres -
[https://postgres.heroku.com/blog/past/2013/9/9/postgres_93_n...](https://postgres.heroku.com/blog/past/2013/9/9/postgres_93_now_available/)
along with how you can provision a 9.3 on us to begin using it right away, and
then of course you can always see the full what's new on the wiki
[http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.3](http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.3)

------
jeltz
Some excellent new features in 9.3. My two favorites are 1) not having to
configure SysV shared memory anymore, which removes an installation step, and
2) LATERAL joins, which make it easy to join against a set-returning function
(the subquery usage seems less useful in practice). See What's New for
examples of how LATERAL can be used.
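A minimal sketch of the set-returning-function case (the table and column names are invented):

```sql
-- One output row per element of the series; the function's argument
-- can reference the table to its left, which is what LATERAL allows:
SELECT u.name, s.login_no
FROM users u,
     LATERAL generate_series(1, u.login_count) AS s(login_no);
```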

~~~
ozgune
Yes, it's interesting that they haven't mentioned removing the need to
configure SysV shared memory. In my experience, a good number of developers
initially miss out on this config / kernel setting, and wonder about their
performance issues.

This feature will notably improve usability for people who are just getting
started on Postgres. Now I just wish they'd bump the default shared_buffers
value from 32MB to something more in line with today's systems.

~~~
deafbybeheading
The default is determined by initdb, but was capped at 32MB due to the above
issue. With this change, the cap's been bumped to 128MB.

------
brissmyr
Love the way they are heading with the JSON stuff
[http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-...](http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-json-operators/)

~~~
einhverfr
I expect to get a lot of use out of the ability to convert JSON into row
types.
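For instance (the accounts table is hypothetical), 9.3's json_populate_record() maps a JSON object onto an existing row type:

```sql
CREATE TABLE accounts (id int, email text);

SELECT *
FROM json_populate_record(null::accounts,
                          '{"id": 1, "email": "a@example.com"}');
-- returns one row: (1, a@example.com)
```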

------
dataminer
List of new features in 9.3

[https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_...](https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.3)

~~~
drob
Cute: "Incidentally, this also makes it possible to set up 'cycles' where
replication is going in a circle. Whether that's a feature or a bug depends on
your perspective."

Is there any context in which it would make sense to set up circular
replication? This would necessitate that all of your nodes be read-only, so
I'm not sure what there would be to replicate.

In any case, a bunch of these new features are hot. Particularly excited about
fast failover and custom background workers.

~~~
McGlockenshire
> Is there any context in which it would make sense to set up circular
> replication?

It depends on whether each node in the loop knows that it's already seen the
replication data before, and thus logs and ignores it. That's basically how
MySQL (multi-master) replication loops work.

~~~
drob
But there's no value in closing the loop. If postgres circular replication
works this way, then a -> b -> a works the same way as a -> b, so there's no
new capability here.

------
buckbova
I was excited to see this:

"Automatically updatable VIEWs"

But then read this:

[https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_...](https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.3#Updatable_Views)

"Simple views can now be updated in the same way as regular tables. The view
can only reference one table (or another updatable view) and must not contain
more complex operators, join types etc."

As primarily a database developer, to me this is useless. Not sure why I'd
want a view for only one table.

For permissions there have been column-level perms for some time, which is
more efficient than multiple views.

~~~
hvs
This is pretty standard (it is how updatable views work in MSSQL). The
advantage they provide is if you wanted to provide a limited view into a table
(say 4 of 20 columns) but still allow updates. Granted, that's a pretty small
use case, but updating a view that spanned multiple tables could easily become
a monster of a problem to solve generally.
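The "limited view into a table" case looks roughly like this (names are illustrative):

```sql
-- Expose 4 of the table's columns; in 9.3 this view is automatically
-- updatable, no INSTEAD OF trigger or rewrite rule required:
CREATE VIEW customer_contact AS
    SELECT id, name, email, phone
    FROM customers;

UPDATE customer_contact SET phone = '555-0100' WHERE id = 42;
```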

~~~
jpitz
>it is how updatable views work in MSSQL

Not exactly. The updated columns may only unambiguously reference one table,
but you can execute an update on a view which references multiple tables.

------
agilord
I am a big fan of PostgreSQL, but I always have issues with the cluster
configuration part, e.g. failovers, automatic new-master election and all the
crazy administration around WAL shipping. Was there any improvement on this,
either in the code base or by a 3rd party? What do people use to handle larger
PostgreSQL clusters?

As a contrast, I really like Riak's ability to just work with one node down...

~~~
pilif
The one big improvement in 9.3 is that synchronizing with a new master has
become much easier. In earlier releases you had to more or less either use
file-based WAL archiving or rsync your whole data directory over; now the
built-in replication protocol can handle this for you. We also have
pg_basebackup, which is all you need to set up a new slave.

But yeah - if you need automatic failover and master election, you still need
third-party tools. Some have had success with pgPool as an out-of-the-box
solution (I haven't; I had severe reliability issues with pgPool. You might
have better luck), others roll their own scripts.

The process isn't complicated, it just requires reading a lot of manpages and
thinking ahead, but once you've got the process down, Postgres itself is
reliable enough that its (admittedly limited) tools just work (which is a very
good thing).

As long as Postgres doesn't do master-master replication, failover will always
be a complicated topic to deal with.

~~~
jeltz
In 9.3 they also made failover faster.

------
Cieplak
I'll still go with MangoDB any day. Its autosharting is unparalleled.
[https://github.com/dcramer/mangodb](https://github.com/dcramer/mangodb)

~~~
juzfoo
Interesting! Never heard of this before; even Google seems to bring up MongoDB
answers by default. Have you used this in production? Care to share your
experiences?

~~~
xionon
It's a joke, it just pipes everything to /dev/null

------
lifeisstillgood
This places the decision to move to Mongo or redis in serious doubt - it's an
impressive competitive attack.

~~~
sehrope
Mongo - Sure. I'm hard-pressed to find genuine reasons to use MongoDB these
days. Postgres provides everything I need in a persistent data store and has a
ton of other stuff I didn't know I'd need (till I did!). Mongo was great to
use with something like node.js, where the async/schema-less model translates
perfectly, but using Postgres with the async driver[1] isn't bad either. Add
in hstore, JSON support, plV8 ... and what exactly would somebody use Mongo
for? :D

Redis - No clue what you're saying here. If you're using Redis then you're
keeping your data in memory and dealing with low level structures. It's for
completely different use cases. A classic example is rate limiting for an API.
Doing it in Postgres with a disk I/O per request would cripple any semi-
popular API. The data doesn't need to be _exactly_ persistent (if we miss a
few updates due to the server crashing we don't really care). In exchange for
that it's blazing fast for individual writes that we can batch together to
backup to something like Postgres (or Redis's built in persistence like AOF).

[1]: [https://github.com/brianc/node-postgres](https://github.com/brianc/node-postgres)
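The rate-limiting pattern above can be sketched without Redis at all. Here's a minimal fixed-window counter in plain Python; in production the dict would be replaced by Redis INCR/EXPIRE on a per-window key, so no request ever touches Postgres:

```python
import time

# A minimal fixed-window rate limiter. Each (key, window) bucket counts
# requests; once the count exceeds the limit, further requests in that
# window are rejected. Redis would hold these counters in production.

class RateLimiter:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # (key, window_start) -> request count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))
        count = self.counts.get(bucket, 0) + 1
        self.counts[bucket] = count
        return count <= self.limit

limiter = RateLimiter(limit=2, window_seconds=60)
print([limiter.allow("api-key-1", now=0) for _ in range(3)])  # → [True, True, False]
```

A batch job can then periodically flush the accumulated counters to Postgres, as the comment suggests.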

~~~
ruricolist
Postgres has "unlogged tables" for the not-exactly-persistent use case.

~~~
saurik
Additionally (as unlogged tables will cause total data loss on failure, as
opposed to just missing some updates), you can alternatively tune the
durability of your changes per transaction: PostgreSQL lets you set the
synchronous_commit variable for individual transactions, letting you go
entirely asynchronous (you toss data at the database, it returns immediately,
and the data is saved to disk during the next ganged commit).
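The per-transaction knob looks like this (the table is made up):

```sql
BEGIN;
-- Only this transaction gives up synchronous durability: a crash may
-- lose it, but it can never corrupt or half-apply the data.
SET LOCAL synchronous_commit TO off;
INSERT INTO hit_counter (page, at) VALUES ('/home', now());
COMMIT;  -- returns before the WAL record is flushed to disk
```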

------
goes_to_11
Are there any PostgreSQL administration tools like HeidiSQL (which is for
MySQL)[1]? Seriously, that piece of software is so feature-rich, polished and
fun to work with that it makes me choose MySQL over PostgreSQL when the MySQL
feature set fits my needs. Sure, I know pgAdmin, but some features are rather
clunky to use and it doesn't really look that actively developed.

[1]: [http://www.heidisql.com/](http://www.heidisql.com/)

~~~
johnyzee
I am building a (free) web interface for Postgres that aims to be fun and easy
to use: [http://www.teampostgresql.com](http://www.teampostgresql.com)

It has a lot of user friendly features particularly for data viewing and
navigation, such as clicking through foreign key references to referenced
rows, and the other way, looking up rows that reference this one, plus a lot
of other stuff that I missed from other admin packages.

There's a demo here:
[http://teampostgresql.herokuapp.com/](http://teampostgresql.herokuapp.com/)

Happy to answer any questions.

~~~
gdulli
It looks promising. I wanted to try it out but I got this error when it was
reading the metadata on my database:

"Encountered more 0 entries in index columns list than there were expressions
in 'indexprs' expressions list for index
'activity_emd5_00_expr_activity_type_expr1_idx'. Current count = '1', parsed
expressions = 1, 'indexprs' value='({OPEXPR :opno 3963 :opfuncid 3948
:opresulttype 25 :opretset false :opcollid 100 :inputcollid 100 :args ({VAR
:varno 1 :varattno 6 :vartype 114 :vartypmod -1 :varcollid 0 :varlevelsup 0
:varnoold 1 :varoattno 6 :location 43} {CONST :consttype 25 :consttypmod -1
:constcollid 100 :constlen -1 :constbyval false :constisnull false :location
60 :constvalue 12 [ 48 0 0 0 118 101 114 116 105 99 97 108 ]}) :location 57}
{OPEXPR :opno 3963 :opfuncid 3948 :opresulttype 25 :opretset false :opcollid
100 :inputcollid 100 :args ({VAR :varno 1 :varattno 5 :vartype 114 :vartypmod
-1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 5 :location 89} {CONST
:consttype 25 :consttypmod -1 :constcollid 100 :constlen -1 :constbyval false
:constisnull false :location 108 :constvalue 6 [ 24 0 0 0 105 100 ]})
:location 105})'"

For your reference, here's that index:

"activity_emd5_00_expr_activity_type_expr1_idx" UNIQUE, btree ((source_details
->> 'vertical'::text), activity_type, (activity_details ->> 'id'::text))

~~~
johnyzee
Thank you, I will look into this. What is your PG version if I may ask?

~~~
gdulli
9.3beta2

~~~
johnyzee
Hey, you might not get this but the issue you reported is now fixed (failure
to resolve segmented index expressions after some changes to PG system
functions). Thanks for reporting this and let me know if I can help with
anything.

------
ghostdiver
The foreign key deadlock fix is finally out to production after 5 years!
Hopefully it is now safe to use foreign keys in PostgreSQL :)

Given tables A, B and C, with FKs in A and B pointing to C_column1: running
one function which updates A and another which updates B, both referring to
the same C row via the FK but not modifying it in any way, caused a deadlock
in pre-9.3. Now it's fixed, so I can use FKs again after so many years, and
it's just crazy :)

------
zwily
9.3 is also laying some of the groundwork necessary for bi-directional
replication in the future (hopefully 9.4):
[http://wiki.postgresql.org/wiki/BDR_User_Guide](http://wiki.postgresql.org/wiki/BDR_User_Guide)

That means native multi-master PG replication is on its way. Woot.

~~~
ghostdiver
the idea of a geographically distributed database is useless from my POV

~~~
jimktrains2
Why? Some background on your thoughts would be helpful.

It (probably) won't be ACID compliant, but it'll be extremely useful for most
use-cases.

~~~
ghostdiver
Sorry:)

Not planning to have a database distributed all over the globe, and even if I
had to distribute the data I would do it at the application level with tools
like RabbitMQ.

~~~
jimktrains2
But once you write to Rabbit, you don't know that it'll make it to _any_ db.
At least with MM replication you know it's somewhere, and short of disk
corruption, you should be able to retrieve it?

~~~
ghostdiver
I have 8 years of experience with Postgres and I already know that a lot of
cool built-in features just do not work in production under heavy load.
Usually I have to build some logic into the application in order to protect
myself from a postgres-driven fuck up.

That's why I am sceptical about this new cool and ambitious built-in MM
replication.

~~~
einhverfr
Examples please? I am aware of hard corners in places, and rough spots, but
some clear discussion might be helpful.

------
polskibus
fantastic, hopefully this will enable further progress in GPU-based database
accelerators like
[http://wiki.postgresql.org/wiki/PGStrom](http://wiki.postgresql.org/wiki/PGStrom)

------
akurilin
Love PG and the community around it. The IRC channel on Freenode has been an
invaluable resource, with a lot of very patient folks in it answering very
dumb questions (mostly from me).

------
minaguib
I'm super-excited that this release includes the FOR KEY SHARE lock
granularity (used by foreign keys) - this is a big win for web app developers.
I blogged about it 3 years ago:
[http://mina.naguib.ca/blog/2010/11/22/postgresql-foreign-key...](http://mina.naguib.ca/blog/2010/11/22/postgresql-foreign-key-deadlocks.html)

------
16s
Postgres is the C++ of databases. It is well thought out and designed. It
seems overly complex at first until you actually use it a lot to solve hard
problems, then you understand why it is the way it is and just how great all
those features are.

~~~
e12e
Not quite sure what RDBMSs you are comparing Postgres to here... Firebird?
Microsoft SQL/Sybase? Oracle? Because none of those strike me as particularly
"simple" products?

~~~
corresation
Almost certainly MySQL, which is overwhelmingly the competitor in the Postgres
space. A big reason why MySQL took off is the same reason that PHP took off --
it is very quick and easy to take the first steps.

~~~
nirvdrum
I think a large part of that simplicity can be attributed to the lack of a
Windows port of PostgreSQL at the time that PHP was gaining traction. MySQL
was the only reasonable option then, and then the community mindset
solidified.

~~~
dugmartin
I think it was the need for manual vacuuming. If you were running a small web
host at the time picking MySQL over Postgres meant a huge reduction in
potential support tickets related to non-vacuumed databases causing customers
to run out of space.

~~~
nirvdrum
That certainly didn't help, but if your DB was large enough that vacuuming
mattered, you probably found a cron solution. The other place I saw MySQL win
was in single-access benchmarks that were scattered across the Web. Postgres
would win in concurrent scenarios pretty handily, but people were glued to
those serial benchmarks.

------
jfchevrette
The more I hear about Postgresql the more I'm intrigued. I'm a sysadmin by
profession and I manage quite a few MySQL servers. Are there any good
resources to learn Postgresql (both admin & usage) outside of the
documentation?

~~~
teddyh
The official documentation is really quite excellent.

------
juzfoo
What ORM/driver would anyone here suggest to go with Express/Node.js when
working with Postgres?

Also, are there any libraries similar to the Ruby gem "apartment" that would
make multitenancy with Postgres easier in the Node.js world?

~~~
netghost
For what it's worth, Node's ORM packages seemed fairly immature: either
lacking support for migrations or relations, being awkward to work with, or
missing some other functionality I had grown accustomed to.

Bookshelf ([http://bookshelfjs.org/](http://bookshelfjs.org/)) seemed quite
nice though, but by the time I got around to it I decided my project wasn't
very interesting anyways ;)

------
yeukhon
Just a few days ago I was talking to people about JSON support in 9.2 and how
I was excited about the upcoming 9.3 release with more built-in support for
JSON (I wasn't aware of the release date though). Here we are.

