

JSON will be a core type in PostgreSQL 9.2 - zmanji
http://people.planetpostgresql.org/andrew/index.php?/archives/244-Under-the-wire.html

======
samstokes
This is nice, but it's a shame that after going to the trouble of writing a
JSON parser in C and incorporating it into the main Postgres codebase, they
didn't go one step further and add a json_project_key function that pulls
values out of a JSON blob. That, combined with functional indices, allows you
to store JSON blobs in your database _and_ index on values inside them, or put
another way, get the data model benefits of a NoSQL database [1] without
sacrificing the mature Postgres internals.

The Heroku Postgres guys have been playing with this idea [2] using the PL/V8
plugin, which embeds Javascript as a supported language inside Postgres (and
thus makes it trivial to implement the json_project_key function), but if
Postgres is going to natively support JSON parsing then it shouldn't take an
addon module to achieve this.

[1] Attempting to forestall the thread-jacking: I know NoSQL databases have
other benefits besides their data model, but for some applications that's
certainly _one of_ the benefits.

[2] <https://gist.github.com/1150804>
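The idea sketched above can be put together today with PL/V8 and a functional
index. A hedged sketch, not a tested recipe: `json_project_key` is the
hypothetical function named in the comment, and this assumes the plv8
extension is installed:

```sql
-- Hypothetical helper, as proposed above; assumes CREATE EXTENSION plv8.
CREATE FUNCTION json_project_key(blob text, key text) RETURNS text AS $$
    return JSON.parse(blob)[key];
$$ LANGUAGE plv8 IMMUTABLE STRICT;

CREATE TABLE events (body text);  -- JSON blobs stored as text

-- A functional index on a value inside the blob:
CREATE INDEX events_user_idx ON events (json_project_key(body, 'user_id'));

-- This query can now use the index:
SELECT * FROM events WHERE json_project_key(body, 'user_id') = '42';
```

The function must be declared IMMUTABLE for Postgres to accept it in an index
definition.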

~~~
megaman821
I can't help but think this is just step 1 in bringing native support for JSON
indexing and querying.

~~~
masklinn
On the other hand, as far as I can see XML support still does not allow for
indexing or querying of the document itself, at least in core, even though
it's been a postgres datatype for a long time.

------
untog
Interesting. Being able to output the results of a query as JSON will be
useful when running something like Node.js. I can't shake the feeling that
something like this doesn't belong in the SQL statement, though; the choice of
output format seems like something connecting libraries ought to handle.

That said, having seen 'core type' I instantly imagined being able to query
based on JSON properties, which doesn't appear to be the case. Not surprising,
because it would be a huge amount of work... but it's nice to imagine.

(before anyone says anything- yes, I know NoSQL exists. But a hybrid solution
using Postgres would be very interesting)

~~~
conesus
You might be interested in postgresql's hstore:
<http://www.postgresql.org/docs/9.0/static/hstore.html>

It's a key-value store that allows querying, which is what I think you are
lamenting in your comment. Here's a bit more about how to query and index with
hstore: <http://lwn.net/Articles/406385/>. It's pretty simple and doesn't have
anywhere near the number of querying possibilities that MongoDB has, but it
can be used for on-the-fly column names (similar to JSON). You can only query
on the root node's children, unlike the open-ended possibilities in NoSQL.
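A minimal sketch of the above, assuming the hstore extension has been
installed (`CREATE EXTENSION hstore`); the `->` and `@>` operators and the
GiST/GIN index support are hstore's documented features:

```sql
-- Assumes: CREATE EXTENSION hstore;
CREATE TABLE places (
    id   serial PRIMARY KEY,
    tags hstore              -- free-form key/value data
);

INSERT INTO places (tags) VALUES ('cuisine => thai, wifi => yes');

-- Fetch a value by key, or test containment:
SELECT tags -> 'cuisine' FROM places;
SELECT id FROM places WHERE tags @> 'wifi => yes';

-- A GiST (or GIN) index makes the containment query indexable:
CREATE INDEX places_tags_idx ON places USING gist (tags);
```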

~~~
untog
Oh, that is very interesting, thanks!

I'm (quite happily) tied to Postgres because I'm using PostGIS, but the
ability to add freeform data to a location would be ideal. Looks like I may
already have a solution here.

~~~
wlievens
Cool, PostGIS! What are you building? I worked as GIS engineer in 2010.
PostGIS is pretty cool technology.

~~~
untog
I'm building a 'taxi tracker' app, for want of a better term: it gives you
estimated trip times and lets you share the journey with other people so they
know where you are.

That part doesn't require PostGIS as such, but it's for an NYC city government
app competition, so we have all sort of city datasets to use. I've used
PostGIS to help make custom map tiles (preview at
<https://twitter.com/#!/taxonomyapp/status/149565007384940545>), highlight the
outline of the building you're heading to... all sorts. It's been a fantastic
learning exercise.

~~~
mathias_10gen
I don't know exactly what your GIS needs are, but it might be worth seeing if
MongoDB's geospatial indexing meets them if you are interested in querying
free-form data.

<http://www.mongodb.org/display/DOCS/Geospatial+Indexing>
[http://www.mongodb.org/display/DOCS/Geospatial+Haystack+Inde...](http://www.mongodb.org/display/DOCS/Geospatial+Haystack+Indexing)

------
wulczer
For the record, the title is misleading and Andrew Dunstan (the guy whose blog
this entry links to, a major PostgreSQL contributor and one of the people
involved in having a JSON type in Postgres) clarifies:

[http://people.planetpostgresql.org/andrew/index.php?/archive...](http://people.planetpostgresql.org/andrew/index.php?/archives/245-I-didnt-say-that.html)

------
kemiller
My first reaction to this is - "Ugh. Haven't we been down this road with XML?"
I don't want the format du jour baked into my datastore. Now, if it had a
field that stored some sort of abstract lists-and-hashes structure (sort of
redis-in-a-field), that sounds more interesting to me. I realize JSON is
isomorphic to just that, but it still seems like flavor of the week.

~~~
elehack
Look at PostgreSQL's hstore data type - that's pretty much what you're asking
for.

~~~
jaylevitt
hstore is all kinds of awesome, but one limitation is that the values are pure
text. You can nest an hstore inside another, but only if you're smart enough
to know that it SHOULD be treated as an hstore; as far as Postgres is
concerned, it's text that happens to contain '=>'.

If JSON support means first-class type support in a nested object, that's a
huge leap forward.
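The limitation described above is easy to see in a small sketch (again
assuming the hstore extension is installed): the nested value comes back as
plain text, and Postgres only treats it as an hstore if you cast it yourself:

```sql
-- The inner "hstore" is stored as plain text containing '=>':
SELECT ('prefs => "theme => dark"'::hstore) -> 'prefs';
-- returns the text 'theme => dark'

-- You have to know to cast it back before descending further:
SELECT ((('prefs => "theme => dark"'::hstore) -> 'prefs')::hstore) -> 'theme';
-- returns 'dark'
```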

------
heyrhett
Pretty neat. I think this is the important bit though: "Basically, his patch
just parses the text to make sure it was valid JSON, and stores it as text."

So, if you already have an environment where you are certain you are inserting
valid JSON, it's not much different than just using the text type today.

~~~
jacques_chester
> So, if you already have an environment where you are certain you are
> inserting valid JSON, it's not much different than just using the text type
> today.

Today, yes.

Tomorrow, when you launch an API and suddenly hundreds of different apps are
talking to your systems, no.

Putting rich descriptions of data right next to the data is a good thing in
the long run.

------
igrigorik
I think it's worth mentioning that MySQL has a nice collection of UDF's to
handle JSON object output, arrays, and so on:
<http://www.mysqludf.org/lib_mysqludf_json/index.php>

------
leeoniya
it would be super awesome if JSONH <https://github.com/WebReflection/JSONH>
(json for homogenous collections) was standardized and adopted. it would be a
great csv replacement and a perfect fit for recordsets, providing minimal
overhead.
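As I understand the JSONH scheme, a homogeneous list of objects is flattened
into one array: the key count, then the keys once, then the values row by row.
A sketch of that packing in Python (my own reading of the format, not code
from the JSONH project):

```python
import json

def jsonh_pack(rows):
    # Homogeneous collection: every dict is assumed to have the same keys.
    keys = list(rows[0])
    out = [len(keys), *keys]          # key count, then the keys once
    for row in rows:
        out.extend(row[k] for k in keys)  # then values, row by row
    return out

def jsonh_unpack(packed):
    n = packed[0]
    keys = packed[1:1 + n]
    vals = packed[1 + n:]
    return [dict(zip(keys, vals[i:i + n])) for i in range(0, len(vals), n)]

records = [{"id": 1, "name": "ada"}, {"id": 2, "name": "alan"}]
packed = jsonh_pack(records)
print(json.dumps(packed))   # [2, "id", "name", 1, "ada", 2, "alan"]
assert jsonh_unpack(packed) == records
```

The keys appear once instead of once per record, which is where the savings
over plain JSON come from for large recordsets.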

------
bdfh42
Erm... JSON is pure text, so not much of an innovation in truth. I can store
it in SQLite right now.

~~~
TylerE
I think you're missing the point. With this feature you'll be able to write a
query that returns a JSON string of the results, possibly as a field in a
larger result set. It also validates that the JSON parses on the way in.
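For the output side, a small sketch using the function names that eventually
shipped in 9.2 (`row_to_json` and `array_to_json`; the patch under discussion
here was still in flux at the time):

```sql
-- One row as a JSON object:
SELECT row_to_json(t)
FROM (SELECT 1 AS id, 'ada' AS name) AS t;
-- {"id":1,"name":"ada"}

-- A whole result set aggregated into one JSON array:
SELECT array_to_json(array_agg(row_to_json(t)))
FROM (SELECT generate_series(1, 2) AS id) AS t;
```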

~~~
bdfh42
Thanks for the help - so it is a "result type" (not so much core) - OK that
will save a few lines of code - unless I have to write as many lines to handle
result sets that do not parse...

I suspect I am not a big fan of abstractions.

~~~
julian37
I'm with you, this smells of feature creep. I don't see a big benefit over
serializing the result set on the client. This is literally a one-liner if
you're using an ORM or something like PEP 249.

The new constraint allowing you to ensure the string stored is valid JSON
sounds useful though.
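The client-side one-liner alluded to above, sketched against PEP 249 (sqlite3
stands in for a Postgres driver here so the example is self-contained; any
DB-API 2.0 driver exposes the same cursor interface):

```python
import json
import sqlite3

# sqlite3 implements PEP 249; swap in psycopg2.connect(...) for Postgres.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "alan")])

cur = conn.execute("SELECT id, name FROM users")
cols = [d[0] for d in cur.description]          # column names from the cursor
payload = json.dumps([dict(zip(cols, row)) for row in cur])
print(payload)  # [{"id": 1, "name": "ada"}, {"id": 2, "name": "alan"}]
```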

~~~
jeffdavis
> I'm with you, this smells of feature creep.

There are good reasons to have JSON support in the DBMS, such as querying and
indexing on the fields within a JSON document.

My only criticism is that postgres has a great extensions mechanism, and it
would be better if effort were spent improving the ecosystem around that. I
can't think of any reason for it to be in core if installing an extension were
a trivial and widely accepted practice.

But, it takes time to really develop that ecosystem. And there are users that
want JSON support yesterday.

EDIT: from a technological standpoint, there is no reason why it can't exist
as an extension. PostGIS is much more sophisticated on all counts, and it's an
extension.

~~~
fdr
I agree with you inasmuch as it's ideal for an extension, and argued for its
inclusion as a 9.3 feature (looks like Christmas might come early, though).
However, I concluded with this:

    
    
      Right now the perception of Postgres...actually, databases in general,
      including virtually all of the newcomers -- is that they are
      monolithic systems, and for most people either "9.3" will "have"
      javascript and indexing of JSON documents, or it won't.  In most cases
      I would say "meh, let them eat cake until extensions become so
      apparently dominant that we can wave someone aside to extension-land",
      but in this case I think that would be a strategic mistake.
    

[http://archives.postgresql.org/pgsql-hackers/2011-12/msg0078...](http://archives.postgresql.org/pgsql-hackers/2011-12/msg00782.php)

So this is definitely a social problem. Why not move UUID out of core (not
uuid-ossp, which implements generation, but UUID parsing and storage)?
Because, for now, one cannot assume extensions are available for really
common, useful functionality.

Luckily, I think things are on the right path: at a future juncture, even a
commonly desired data type will be able to live as an extension.

