
PostgreSQL 9.4 Released - petercooper
http://www.postgresql.org/about/news/1557/
======
pilif
Like every year before, the Postgres team has blessed us with an early
christmas present. And like every release post before, I'd like to use this
opportunity to say thanks to the team for the awesome job they are doing year
after year.

It's not just the database itself (and that's awesome in its own right), but
also all the peripheral stuff: the documentation is seriously amazing and
very complete, and the tools that come with the database are really good too
(like psql, which I still prefer to the various UIs out there).

Code-wise, I would recommend anybody have a look at their git repo and the
way they write commit messages: they are a pleasure to read and really
explain what's going on. If everybody wrote commit messages like this, we'd be
in a much better place where code-archaeology is concerned.

Patches from the community are always patiently reviewed and, contrary to many
other projects, new contributors need neither a thick skin nor a flame
retardant suit. The only thing required is a lot of patience, as the level of
quality required for a patch to go in is very, very high.

Finally, there's #postgresql on Freenode, where core developers spend their
time patiently helping people in need of support. Some questions could be
solved by spending 30 seconds in the (as I said: excellent) manual and some
point to really obscure issues, but no matter what time it is, somebody in
#postgresql is there to help you.

I think there's no other free software project out there that just gets
everything right: very friendly community, awesome documentation, awesome
tools, and of course an awesome product to begin with.

Huge thanks to everybody involved.

Also: Huge YAY for jsonb - I have many, many things in mind I can use that for
and I have been looking forward to this for a year now.

~~~
gh02t
> we'd be in a much better place where code-archaeology is concerned.

This sounds like a great setup for a sci-fi novel. 500 years into the future,
the infrastructure their distant ancestors coded has begun to fail. Now Biff
Miffington, code-archaeologist, must sift through millions of forgotten
messages using a mysterious tool remembered only as "git." Its interface is
arcane and the remaining messages broken, tainted by the destructive influence
of Mountain Dew and Cheetos. Will he unravel the mystery that's causing Candy
Crush Saga MCCXXXI to kill its users?

Edit: As for what OP actually said, I'd also like to add that Postgres is an
awesome product.

~~~
angersock
Testing the waters, I'd played with the idea of a story with basically this
setup:

Two new developers start at a company/startup, and are brought in to do a six-
month sprint to fix a stalled/broken product after the development team
becomes unavailable because _reasons_. So, they're dropped into a codebase and
are trying to pull everything together.

However, as they work through into deeper and deeper parts of the system, the
commit messages and comments get more and more cryptic and unsettling, hinting
at the reasons for the prior team's dissolution, the business forces that
caused that to happen, and maybe something worse/better going on outside.

I'm really lazy though. :(

~~~
phaemon
>after the development team becomes unavailable because reasons.

Every member of the programming team was attacked and killed by wolves in
unrelated incidents on the same night.

And they said it couldn't happen...

------
tracker1
Just want to say to the PostgreSQL and EnterpriseDB guys that it's always
great to see the progress on this. My hope for 9.5/10 is that we will see
PLV8 and replication baked into the actual release.

PLV8 is such a natural fit with the new JSON(B) types that it's probably going
to become the most used extension with that data type... and imho it's sorely
missing from the out-of-the-box experience. I'm glad that they've concentrated
on getting the data structure and storage right first. Hopefully we'll see
this in vNext.

As to replication, I understand that this is part of EnterpriseDB's business
model; just the same, not having the basic replication pieces baked in is
still lacking compared to other databases. Even if the graphical tooling were
commercial-only, and all the knob-frobbing via config or command line were
more complex, having it in the box is a must imho. I actually really like how
MongoDB handles their replica sets, and where RethinkDB is going with this as
well. Though they aren't primarily transactional SQL
databases, it's a must-have feature these days. Replication with automagic
failover is a feature that has gone past enterprise-only.

One last piece would be built-in functions similar to the
String.prototype.normalize that was added to JavaScript, so that input
strings could be normalized more easily for comparison/indexing, though PLV8
support could/would bring this readily.

All the same, thanks for all of your hard work, and I look forward to the
future of PostgreSQL.

~~~
internetisthesh
Yes, replication with auto failover is far away from enterprise-only today. If
you pay $5/month you can get a SQL database in Azure running on 3 nodes with
auto failover. One node is synchronous and the second one is updated
asynchronously. I wouldn't choose a database today unless setting up something
like that is obvious and trivial.

------
gfodor
JSONB is getting a lot of attention (and deservedly so), but logical decoding
is much more exciting to me. Being able to capture Postgres transactions to
put into a durable log (like Kafka) for downstream processing is a fundamental
tool for building a unified logging architecture. If you've worked with
Hadoop you've probably tried to approximate this by hand by taking regular
snapshots of your database or something, but this is much, much saner.
Really exciting. Great work Postgres team!
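
For the curious, here's a minimal sketch of what this looks like with the
test_decoding example plugin that ships with 9.4 (the slot name is made up):

```sql
-- Create a logical replication slot using the bundled example plugin
-- (requires wal_level = logical and max_replication_slots > 0):
SELECT * FROM pg_create_logical_replication_slot('demo_slot', 'test_decoding');

-- After some writes, consume the decoded change stream:
SELECT * FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);

-- Clean up:
SELECT pg_drop_replication_slot('demo_slot');
```

A real pipeline would swap test_decoding for an output plugin that feeds
Kafka or similar.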

------
jeltz
Looking forward to when we move to 9.4 so I can start using "sum(foo) FILTER
(WHERE bar)" instead of the ugly "sum(CASE WHEN bar THEN foo END)".
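
For anyone who hasn't seen the new syntax yet, a minimal sketch against a
hypothetical payments table:

```sql
-- Hypothetical table: payments(amount numeric, refunded boolean)

-- The pre-9.4 idiom:
SELECT sum(CASE WHEN NOT refunded THEN amount END) AS net
FROM payments;

-- The 9.4 FILTER clause, same result:
SELECT sum(amount) FILTER (WHERE NOT refunded) AS net
FROM payments;
```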

~~~
wvenable
Wow, that's fantastic. My queries are littered with that type of construct.
Unfortunately, we are using SQL Server and Microsoft hardly ever makes any
developer-friendly enhancements to T-SQL.

~~~
tracker1
True enough... I can't even begin to wrap my head around the XML wrangling
functions in T-SQL; I feel like a n00b copying JS methods for jQuery when I go
there. That said, it's probably one of the easiest RDBMS servers to
administer for small-to-medium-sized deployments.

------
systematical
Switched over to PostgreSQL for a personal project for the json datatype.
Great if you want some mongo-esque document storage without losing out on
having a relational database.

[http://clarkdave.net/2013/06/what-can-you-do-with-
postgresql...](http://clarkdave.net/2013/06/what-can-you-do-with-postgresql-
and-json/)

~~~
aidos
That info is for PostgreSQL 9.3 (and 9.2). 9.4 comes with other enhancements
if you're using json. jsonb in particular opens up whole new possibilities if
you're doing heavier work with json data.

------
radicalbyte
Next year I really need to switch from MSSQL to Postgres. The work that the
Postgres team have done in the last 2-3 years is really amazing.

They are also clearly reaping the benefits of some very smart architectural
decisions, and that gives me the confidence that they will be able to continue
innovating in the coming years.

~~~
g8oz
If only there were a way to run performant Postgres on Windows. Maybe it's
time to learn Hyper-V so I can run it in a Linux VM.

~~~
jrapdx3
That's close to what I'm doing, except running FreeBSD 10.1 in Hyper-V. It's
actually pretty easy to set up since FBSD 10.1 is available for download as a
VHD. Just need to specify the VHD image file as the disk to attach when
creating a new HV VM.

It's somewhat more fun/challenging to configure network access for the VM.
Once that's done it's very handy for developing web servers/apps. With the web
server running in the VM, the client/browser on the host points to the VM just
like any remote site. PostgreSQL in the VM performs quite well.

IMO this kind of development is much more satisfying in a unix environment
than under Windows, and Hyper-V provides a convenient way to get there.

------
taspeotis
It looks like PostgreSQL is on track to slowly succeed MySQL as the de-facto
open source database.

Microsoft tentatively seems to be settling on them as the preferred RDBMS for
non-Windows platforms [1]:

> Within ASP.NET 5 our primary focus is on SQL Server, and then PostgreSQL to
> support the standard Mac/Linux environment.

I use EF+SQL Server and they're very much complementary and provide an
excellent developer experience. NHibernate+SQL Server is woeful unless you
want to use the loosely-typed Criteria stuff. NH's LINQ provider is terrible
and it gets confused at the drop of a hat (call Distinct and _then_ OrderBy?
"I'm sorry Dave, I'm afraid I can't do that"). At this point I'm convinced
only MS know how to write LINQ providers that won't fall over the moment you
try to do something useful with them.

Microsoft writing a LINQ provider for PgSql is a great thing for running .NET
code on non-Windows platforms.

[1]
[http://blogs.msdn.com/b/adonet/archive/2014/12/02/ef7-priori...](http://blogs.msdn.com/b/adonet/archive/2014/12/02/ef7-priorities-
focus-and-initial-release.aspx)

~~~
zapov
> At this point I'm convinced only MS know how to write LINQ providers that
> won't fall over the moment you try to do something useful with them.

I would argue Revenj + Postgres provide much better developer experience. But
as you said, it's not written by Microsoft so that attitude doesn't help it
out.

I'm 100% sure Microsoft can't write a LINQ provider which actually understands
Postgres and can use it to the fullest (as Revenj can).

~~~
dodyg
The problem with Revenj is that it seems to be tied to dsl-platform, which
looks like an online service. People would rather have their compiler tools
with them. You don't want your code investments to go 'poof' should the online
service fail.

~~~
zapov
While that's a fair criticism (to have the compiler offline), dsl-platform can
be licensed for offline use.

It's not like Revenj needs dsl-platform, but rather that dsl-platform
integrates into Revenj.

So to be blunt: would you try/use Revenj if it had part of the dsl-platform
compilers available for offline use?

~~~
dodyg
It's a surprise to me that RevenJ can be used without dsl-platform because the
readme makes it look like it is dependent on dsl-platform.

Yes, I would try dsl-platform if it were available offline. Online compilers
are pretty much deal breakers for me.

------
squigs25
I'm really pumped about the update to GIN indexes, and the ability to
concurrently update materialized views. Both enhancements are huge for the
postgres ecosystem, and especially for productionizing postgres databases.

------
davidgerard
Postgres is about to be the new hotness. I mentioned to our hosting provider
that we were looking into moving our in-house Oracle and MySQL to Postgres
(off Oracle because it's expensive, off MySQL 'cos it's shit) and he said more
than a few customers were looking into this precise move.

We're just getting into PG now, and it's just _really nice_ to set up and use.
I really wish more web stuff properly supported PG and didn't pretty much
require MySQL.

------
mrmondo
Congratulations to the PostgreSQL team for continuously supporting and
improving what is in my mind the best all-round database server out there.

------
gamesbrainiac
EFF launches an iconic case and PG 9.4 on the same day? This is probably the
best day of the year and it's not even Christmas yet.

------
codeaken
Did PostgreSQL just kill MongoDB?

~~~
organsnyder
No.

~~~
bonif
You're right, lots of amateur-hour developers might still be using MongoDB.

~~~
organsnyder
Yep. MySQL isn't dead yet, either. :-)

I'm not familiar enough with MongoDB (or Postgres 9.4) to really answer the
original question. My guess is that Mongo will still be applicable to certain
use-cases, but—like you mention—the majority of users will be those who really
don't understand the technologies and their strengths/weaknesses.

------
davidw
There are a lot of things that have changed since my first programming job,
back in 1997. Things that I still use and love: Postgres, and Emacs.

------
mgkimsal
Is there an equivalent to the MySQL handlersocket stuff in postgresql?

[http://www.percona.com/doc/percona-
server/5.5/performance/ha...](http://www.percona.com/doc/percona-
server/5.5/performance/handlersocket.html)

[http://www.slideshare.net/akirahiguchi/handlersocket-2010062...](http://www.slideshare.net/akirahiguchi/handlersocket-20100629en-5698215)

It might go against the "no transaction" crowd, but it seems useful for
performance-critical needs. I'm scheduling a bit of testing time with it next
week to see if it's something I'd roll out in production (Maria 10 system).

------
tiffanyh
I really wish someone would update these benchmarks to the latest releases of
Scientific Linux, FreeBSD & DragonFly:

[http://www.dragonflybsd.org/performance/](http://www.dragonflybsd.org/performance/)

------
jvinet
With JSONB here, JSONPath starts to become very interesting...

[http://goessner.net/articles/JsonPath/](http://goessner.net/articles/JsonPath/)

[http://blog.redfin.com/devblog/2012/03/json_in_postgres.html](http://blog.redfin.com/devblog/2012/03/json_in_postgres.html)

That article was written before JSON/JSONB showed up, but the idea remains the
same.

I didn't have plv8 installed, so I did some plumbing code in plpython. plv8
would be more suitable though.

[https://github.com/jvinet/pg-jsonpath](https://github.com/jvinet/pg-jsonpath)

------
k_sze
Bye bye MongoDB.

~~~
digitalzombie
How do you cluster in PostgreSQL? Serious question; my preferred NoSQL is
Cassandra and clustering there is pretty easy.

I ask this question every year and PostgreSQL still hasn't delivered it. If
there is a way, there's hardly any documentation on it.

~~~
norkakn
Clustering is really, really hard.

Cassandra is one of the better ones out there, but you have to deal with its
data model and weird consistency promises (which however weird you think they
are, are weirder)

The correct way to cluster also changes dramatically depending on your use
case. Sure there are things like RAC that promise to make it just work, but
those don't scale more than a few nodes.

Mongo is kind of the worst in this - it clusters in one weird way, has bad
tooling, and subtly destroys your data at scale.

The general philosophy with postgres is to do it right, or not do it. There
are ways to do specific kinds of clustering, but all of them (just like mongo,
oracle, etc) have a lot of nuances to them.

If you have a natural shard key, use a bunch of schemas and table inheritance,
and eat the downtime during re-shards. Check out citus as well. They have
their issues, but they can help you hook up what you need.
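
A rough sketch of that inheritance approach, with made-up table names and a
range-based shard key:

```sql
CREATE TABLE events (user_id int NOT NULL, payload text);
CREATE TABLE events_shard_1 (CHECK (user_id < 1000000)) INHERITS (events);
CREATE TABLE events_shard_2 (CHECK (user_id >= 1000000)) INHERITS (events);

-- Queries against "events" see every shard; with constraint_exclusion
-- enabled, the planner skips shards whose CHECK can't match:
SELECT * FROM events WHERE user_id = 42;
```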

~~~
threeseed
> Cassandra is one of the better ones out there

I take it you haven't actually used Cassandra much in the last few years. Its
data model is almost identical to a typical relational one and its
consistency promises are quite clear:

[http://www.datastax.com/documentation/cql/3.0/cql/aboutCQL.h...](http://www.datastax.com/documentation/cql/3.0/cql/aboutCQL.html)

And I've scaled Cassandra clusters from 1 to 100 nodes in hours with no
issues. It really is quite simple. Likewise have had no issues with MongoDB
replica sets. It is definitely not "really, really hard".

> postgres is to do it right, or not do it

What a pathetic cop out. PostgreSQL has been around for decades; they've had
plenty of time to implement a proven, stable solution.

~~~
norkakn
Cassandra avoids some of the really visible issues by being AP instead of CP.
Hbase hits them, but dodges a bit by only having row level consistency. They
are solving very different problems.

The lack of vector clocks in Cassandra can lead to some very non-intuitive
(possibly wrong) behavior - check out their counter implementation for some
rage on that. It's pretty well made though, and I think C*, HBase and Postgres
all have great uses (along with Redis, and a lot of others).

Mongo tends to get things subtly wrong in ways that corrupt data, or that
don't scale, and it gives up both A and C.

------
atonse
Great news! I'd love to move over to this from MongoDB for a project that has
high uptime requirements. But while I think the JSON will really replace it,
does PG have a solution for High Availability (like replica sets) in the
works?

I'm newer to Postgres so am not sure. Replica Sets are the killer feature for
me, more so than just storing JSON documents. I'd appreciate if someone can
chime in. I've done some googling but there seem to be multiple strategies for
replication.

~~~
arthursilva
PG has had async/sync/hybrid replication for years already. It's not as tooled
as MongoDB, but there are tools like
[http://www.repmgr.org/](http://www.repmgr.org/) to amend that.

~~~
internetisthesh
Every time I look into tools like this, they are quite far behind, for
example, AlwaysOn in MSSQL. With PG I have to reseed the original master
if it comes back online after an outage. As far as I can tell it's not fully
automatic and transparent to me as the guy responsible for managing it. With
MSSQL and Elasticsearch+ZooKeeper, for example, nodes can go up and down
without anyone noticing and without me having to do anything. Is this still a
place where PG is behind?

~~~
tensor
Yes and no. There are differences in the data guarantees that something like
Elasticsearch and PostgreSQL give, so it's not really appropriate to compare
those.

MS SQL does have easier tooling for replication. The setup for PostgreSQL is
complex and it doesn't come with out-of-the-box tools to easily manage
failover and recovery, as you mention. Progress is being made in making this
easier, but it's still mostly in the low-level functionality:

[https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_...](https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.4#Replication_improvements)

[https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_...](https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.3#Replication_Improvements)

------
elchief
Full notes here:

[http://www.postgresql.org/docs/9.4/static/release-9-4.html](http://www.postgresql.org/docs/9.4/static/release-9-4.html)

My favourite parts:

Allow views to be automatically updated even if they contain some non-
updatable columns

Allow control over whether INSERTs and UPDATEs can add rows to an auto-
updatable view that would not appear in the view. This is controlled with the
new CREATE VIEW clause WITH CHECK OPTION.

Allow security barrier views to be automatically updatable
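
A quick sketch of WITH CHECK OPTION on a hypothetical table, to show what the
new clause buys you:

```sql
CREATE TABLE accounts (id int, active boolean);
CREATE VIEW active_accounts AS
    SELECT * FROM accounts WHERE active
    WITH CHECK OPTION;

-- Accepted: the new row is visible through the view.
INSERT INTO active_accounts VALUES (1, true);

-- Rejected with a check-option violation: the row would
-- disappear from the view it was inserted through.
INSERT INTO active_accounts VALUES (2, false);
```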

------
nonuby
What's a good place to suggest a PostgreSQL [json] improvement? My message was
intercepted when posting to pgsql-performance. A major one at the moment is
that OFFSET applies the select projection to discarded rows (it's common to
use OFFSET in paging). Under normal circumstances this isn't a problem, but
when the projection does a JSON operation such as reading a field with ->>,
it causes major performance degradation. (Of course this optimization can
only apply to immutable functions.) There are several workarounds, but if
PostgreSQL wants to win back some NoSQL heads it should be straightforward.
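
One of the workarounds alluded to, sketched with a hypothetical docs table:
page in a subquery first, so the ->> projection only runs on the rows that
survive the OFFSET:

```sql
SELECT d.body ->> 'title' AS title
FROM (
    SELECT body
    FROM docs
    ORDER BY id
    OFFSET 100 LIMIT 20
) AS d;
```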

In addition, updating a json field isn't straightforward; these operations
should be supported by first-class built-in functions.

It's getting close, but it's not quite a NoSQL killer yet if they're targeting
people who didn't originally come from an RDBMS background.

------
netcraft
How long till someone creates a library with the mongo api?

~~~
remeh
Actually ...
[https://github.com/umitanuki/mongres](https://github.com/umitanuki/mongres)

------
petercooper
Are there any good books coming out that cover Postgres 9.4? I know the docs
are okay but I want something with more of a narrative structure as my history
with Postgres is spotty. The only one I've found so far is O'Reilly's
"PostgreSQL: Up and Running, 2nd Edition" coming out this month but would
prefer a personal rec.

~~~
VieElm
I have O'Reilly's "PostgreSQL: Up and Running, 2nd Edition" (you can buy the
ebook already) and it's mostly about setting up and administering Postgres,
with a look at the tools around it. It's not a book that goes in depth on how
to actually write SQL for Postgres (although it has a section on this) or on
developing applications with Postgres. It's more of an ops book. Anyway, I
like it because, coming from MySQL, I was unfamiliar with how to set up and
use Postgres, and this book set me straight.

~~~
petercooper
Thanks to Safari, I'm now several chapters into the book and it's perfect for
me - thanks :) It was the ops side of things that was the biggest sticking
point for me, although it seems like it'll dig into the JSONB stuff later on
too.

------
steventhedev
Quick question regarding JSONB:

Is attribute order stable? Obviously, order is not preserved, but if the order
changes on subsequent accesses, this causes problems if you ever serve content
directly from a jsonb field without sorting the attributes manually.

~~~
azdle
Why would this cause problems? Order is irrelevant in JSON. "An object is an
unordered set of name/value pairs."[0] As far as JSON is concerned, there is
absolutely no difference between {"name": "Patrick", "age": 24} and {"age":
24, "name": "Patrick"}.

[0] [http://json.org/](http://json.org/)
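
This is easy to check from psql - jsonb compares by content, so key order
(and whitespace) is irrelevant:

```sql
SELECT '{"name": "Patrick", "age": 24}'::jsonb
     = '{"age": 24, "name": "Patrick"}'::jsonb;
-- returns true; the same comparison on the plain json type's text
-- form would not, since json preserves the input verbatim
```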

EDIT: Well, now that four of us have replied at the same time saying the same
thing, I'd say this topic is well covered.

~~~
skybrian
Sometimes it matters whether serialization to a string is a deterministic
function. If the JSON is the same, you want the output to be the same. A JSON
parser won't care, but it's useful to do a string comparison without parsing
(for example when diffing output).

~~~
lsaferite
But that's not really a correct thing to expect is it? If an object is an
unordered set of name/value pairs then multiple serialized versions of the
object may indeed be the same object data.

~~~
skybrian
It's not something you can assume, but sorting the keys is sometimes a
desirable feature in a JSON serializer. (Just like pretty-printing is.)

~~~
tomiko_nakamura
AFAIK the default serializer does not guarantee the order to be stable, but I
haven't participated in the development of this feature that closely. So maybe
I'm wrong.

Anyway, it should be possible to write a simple serializer on your own - a
trivial PL/V8 function should suffice, I guess. And then you can define a CAST
using that function (but generally adding casts is a bit dangerous, as it may
have unexpected consequences).

------
rpedela
I am pretty happy about the addition of ALTER SYSTEM. I haven't tried it yet,
but I think it will make automatic failover to a standby easier to implement.
Does anyone have experience with this?
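
Haven't used it in anger either, but the shape of it is simple - a sketch
using a replication-related setting (the standby name here is made up):

```sql
-- Written to postgresql.auto.conf, not postgresql.conf:
ALTER SYSTEM SET synchronous_standby_names = 'standby1';

SELECT pg_reload_conf();  -- apply settings that are reloadable

-- Remove the override and fall back to postgresql.conf:
ALTER SYSTEM RESET synchronous_standby_names;
```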

------
aswanson
Even though Rails hides it from me, I honestly don't mind directly working
with the SQL interface on this db, and its language interfaces are awesome.
Thanks, team.

------
apetresc
So when can we expect AWS RDS to support it? :)

~~~
bkeroack
Not trying to be mean but, why wait? If you're stuck on AWS it's not
impossible (though difficult and annoying certainly) to get a decently-
performant Postgres instance going. The experience setting it up will lessen
the vendor-lock that Amazon has on you (convenience always has a price).

~~~
dilap
Outside observer: Man, I figure setting up a "decently performing" Postgres
instance on AWS would be such a common thing to do that it would either be
known to be impossible, or have cookie-cutter instructions, if not a script.
How is it that it's "difficult and annoying" still?

Not trolling, genuinely curious :)

~~~
mbell
It isn't "difficult and annoying" anymore, it used to be before SSD backed EBS
and/or provisioned IOPS because you had to RAID0 together a dozen or so
magnetic EBS volumes to get decent disk performance and then deal with the
annoyance of sorting out a way to take consistent snapshots of the RAID array
for backups.

Now you can just toss a single 1TB SSD backed EBS volume on an instance and
get ~3k iops, or use provisioned iops to get almost any performance level you
need.

~~~
joevandyk
That performs decently now?

I got scared off using EBS a few years ago and use only the ephemeral storage
+ failovers.

~~~
bkeroack
Regardless of whether they're backed by SSD, all EBS volumes on an instance
sit behind a 1 Gbps pipe (except for the more exotic and expensive instance
types). That's part of the reason why Amazon talks about IOPs instead of raw
disk bandwidth.

Go ahead and run:

      $ sudo du -hs /*

...on a vanilla m3.* instance and run iotop in a different session. You'll see
bandwidth numbers that 2002 would be embarrassed about.

~~~
correctdarecord
Bandwidth available for EBS volumes varies. For instance, with EBS-optimized
volumes it can be 500Mbps, 1Gbps, 2Gbps, or 10Gbps, depending on the instance
type, as shown in this chart from an Amazon presentation:

[http://cl.ly/image/440b142t0T1x/Screen%20Shot%202014-12-18%2...](http://cl.ly/image/440b142t0T1x/Screen%20Shot%202014-12-18%20at%2013.16.23.png)

The "NA" for the 8xlarge instance types is because there's no EBS optimization
as an optional feature on those instance types; you automatically get access
to 10Gbps.

Here's a video of the whole talk:
[https://www.youtube.com/watch?v=3OH4-Hx3tlE](https://www.youtube.com/watch?v=3OH4-Hx3tlE)

Here's the slides:
[http://www.slideshare.net/AmazonWebServices/stg302-28617072](http://www.slideshare.net/AmazonWebServices/stg302-28617072)

------
photograve
Does anyone have a performance benchmark and/or experience with the
scalability of this version?

~~~
tomiko_nakamura
That really depends on what version you're using now, and what exactly you
mean by scalability.

If you're using 9.1 or older, you may see a significant improvement in OLTP
workloads on many-core machines (making it linearly scalable to >64 CPUs).
This happened in 9.2 (i.e. ~2 years ago).

The main improvement in 9.4 I'm aware of is the GIN fastscan, which
significantly improves performance of applications using GIN indexes (e.g.
full-text).

Of course, there are many other performance improvements in various places -
the principle is not to make the new version slower.

Some interesting numbers were presented in this talk at pgconf.eu 2014,
including 9.4 beta (but there should be no significant differences):
[http://www.slideshare.net/fuzzycz/performance-
archaeology-40...](http://www.slideshare.net/fuzzycz/performance-
archaeology-40583681)

------
sarciszewski
Best news I've heard all week. Awesome. Time to update my servers :D

------
ageyfman
We currently use the 9.4 beta, and it's been rock solid for us. We chose it
because of the jsonb data type. JSONB has been a great fit for the type of
work that we needed it to do.

------
curiously
I just recently began using PostgreSQL, albeit an older version. Does 9.4
mean that MongoDB is now pretty much a dud? Being able to store, manipulate,
and query JSON data AND have SQL on an established wheel that has been proven
reliable and polished far longer than the age of most other databases?

Are there any code examples (preferably Python) that show how to use JSONB?
I'd love to see some examples on how to query every record that contains a key
in a json, or order rows based on a value in a json object.

off topic: If Meteor.js implements PostgreSQL 9.4 I would seriously consider
using it again. That and maybe make DDP scalable.

~~~
tomiko_nakamura
I'm not a big fan of MongoDB, but I don't think the introduction of JSONB in
PostgreSQL 9.4 makes it a dud (which does not mean MongoDB is not a dud for
other reasons).

JSONB allows you to do a lot of things that people are often doing with
MongoDB (or document databases in general), but there are still some features
not available in PostgreSQL. Built-in sharding, for example. There are
external tools to do that with PostgreSQL, and I do have my doubts about
MongoDB (partially because I only hear about the horror stories), and I expect
similar features in PostgreSQL 9.5 / 9.6, but at the moment it's not there.

Not sure what you mean by Python examples - you can either fetch the data as
'text' and convert it in the application (e.g. json.loads) or just use
psycopg2 with an adapter
([http://initd.org/psycopg/docs/extras.html](http://initd.org/psycopg/docs/extras.html))
and you'll get the data as Python dictionaries.
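
To sketch a couple of the queries the parent asked about in plain SQL
(against a hypothetical events(payload jsonb) table):

```sql
-- every row whose document contains a given top-level key:
SELECT * FROM events WHERE payload ? 'user';

-- order rows by a value inside the document (as text):
SELECT * FROM events ORDER BY payload ->> 'created_at';

-- containment test; the ? and @> operators can use a GIN index:
CREATE INDEX ON events USING gin (payload);
SELECT * FROM events WHERE payload @> '{"type": "click"}';
```

With the psycopg2 adapter mentioned above, each payload comes back as a
Python dictionary.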

The best source of examples is the official documentation
([http://www.postgresql.org/docs/9.4/static/datatype-
json.html](http://www.postgresql.org/docs/9.4/static/datatype-json.html)) and
the "NoSQL on ACID" training from Bruce Momjian and Thom Brown
([https://wiki.postgresql.org/images/d/de/NoSQL_training_-
_pgc...](https://wiki.postgresql.org/images/d/de/NoSQL_training_-
_pgconf.eu.pdf)).

------
hype_this
What a joke. There's not even a MySQL => Postgres 9.4 migration tool that
actually works. Try them.

On paper, Postgres sounds great. But of all the people on here cooing about
it, how many are actually using the tool?

I don't know who has the stronger hype machine on Hacker News: Postgres or
Rust.

~~~
tomiko_nakamura
We certainly are. We operate an analytical service running on PostgreSQL -
tens of TBs of data, hundreds of machines, tens of thousands of clients.
Initially it was running on MySQL, but for various reasons we migrated to
PostgreSQL ~3y ago and never regretted it.

The reasons were both technical (better performance with this kind of
workload, great reliability, excellent code quality, ...) and political (we
have contributed numerous patches to PostgreSQL - not sure if you ever tried
to do that with MySQL).

If you think there's a simple MySQL -> PostgreSQL migration tool (or a
migration between arbitrary databases), you're foolish. Databases are not that
interchangeable - every database has issues that need specific workarounds,
and the 'good bits' are database-specific too. And those things are anchored
in the application code, so if you expect a simple migration tool, you'll be
disappointed.

