Once this work is released, PostgreSQL will be faster than current versions of MongoDB at queries that look for documents with a given value at a given path (or, more generally, with values satisfying a given predicate at paths satisfying a given predicate).
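For the curious, such a query would look something like the sketch below; the table, the data, and even the containment operator are illustrative (modeled on PostgreSQL's existing GIN-indexed hstore/JSON operators), not the patch's exact syntax:

    -- a documents table with a GIN index to accelerate containment queries
    CREATE TABLE docs (id serial PRIMARY KEY, body jsonb);
    CREATE INDEX docs_body_idx ON docs USING GIN (body);

    -- find documents that have the value 42 at path a -> b
    SELECT id FROM docs WHERE body @> '{"a": {"b": 42}}';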
But that has never been one of MongoDB's important strengths, has it? The operations for which MongoDB is optimised are inserting documents, retrieving documents by ID, and running map-reduces across the whole database. Yes, it can query by values in leaves, and it can use indices to make that faster, but that's a bit of a second-tier, bag-on-the-side feature, isn't it? If that were the main thing you needed to do, you wouldn't have chosen MongoDB, right?
Map/reduce? Isn't that slow in general? Can't PostgreSQL's JOINs and aggregate functions do most of what map/reduce is for, but faster?
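For instance, the canonical map/reduce demo, counting items per key, is a plain GROUP BY in SQL (table and column names made up):

    -- word-count-style map/reduce as a single declarative aggregate
    SELECT category, count(*) AS n
    FROM   events
    GROUP  BY category
    ORDER  BY n DESC;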
How can Postgres possibly be faster on insert? Unlike Mongo, Postgres actually needs to write bytes to disk (with a WAL!).
Postgres rocks. 
Not with unlogged tables it does not!
There's also the synchronous_commit setting. From the PostgreSQL docs:

> Specifies whether transaction commit will wait for WAL records to be written to disk before the command returns a "success" indication to the client.
> This parameter can be changed at any time; the behavior for any one transaction is determined by the setting in effect when it commits.
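Concretely, both durability trade-offs are a single statement each (table name made up):

    -- per-table: skip the WAL entirely; contents are truncated after a crash
    CREATE UNLOGGED TABLE cache (key text PRIMARY KEY, val text);

    -- per-session or per-transaction: commit without waiting for the WAL
    -- flush; a crash can lose the last few commits but never corrupts data
    SET synchronous_commit TO off;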
It also means that you can be sure that the data you read from an unlogged table is the data that you put there. I much prefer this over the possibility of maybe reading corrupt data with no way to detect it.
In my case, the use case for an unlogged table is a table that practically works like a materialized view (until Postgres learns to refresh those without an exclusive lock, and maybe even automatically as the underlying data changes). If the table is empty, I can easily recreate the data, so I don't care about WAL logging or replication. If the table's there, perfect: I'll use it instead of a ton of joins and an equally big amount of application logic.

If the data isn't there, well, then I run the application logic that causes those joins to happen. That's way, way slower, but it works out the same.

The advantage of using an unlogged table here is that it's cheaper to update and it doesn't generate replication traffic (unlogged tables are not replicated).
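A minimal sketch of that pattern, with invented names:

    -- rebuildable cache of an expensive join: cheap to write, not replicated
    CREATE UNLOGGED TABLE report_cache AS
        SELECT o.customer_id, sum(oi.price) AS total
        FROM   orders o
        JOIN   order_items oi ON oi.order_id = o.id
        GROUP  BY o.customer_id;

    -- after a crash the table comes back empty; the application notices and
    -- falls back to running the joins directly while it repopulates the cache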
I'm immune to MongoDB bashing by now, but Jesus, keep up with the updates. This hasn't been true for maybe two years.
We do want Jesus to keep up with the updates, for sure, but this isn't just about a random slip-up; it goes a bit deeper than that. This wasn't an unfortunate bug ("oh, we meant to do all we could to keep your data safe"); it was a deliberate design decision.

Dishonesty and shadiness were the problem. When they finally started shipping with better defaults, did we see an apology? Was anyone responsible terminated? Nope, not that I remember.

Now, it could all be a statistical fluke: people just randomly decided to bash MongoDB more than, say, Basho, Cassandra, PostgreSQL, HBase, and other database products? Odd, isn't it? Or isn't it more rational to conclude that it isn't random at all, and that there's a reason for it?

When developers sell to developers, a certain level of honesty is expected. If I claim I'm building a credit card processing API but deliberately add a feature that randomly double-charges to gain extra performance, you should be very upset about it. Yes, it's in the documentation, in fine print on page 156: "oh yeah, enable safe transactions; you should have read that". And as a result you should probably never buy or use my product, and should stop trusting me with credit card processing from then on.

The bottom line is, they're probably good guys to have beers with, but I wouldn't trust them with storing data.
Shady? Yes. Dishonest? No. To be sure, both of these attributes suck, but beyond that there is an absolute world of difference between the two. One passively leaves knowledge gaps that even novice programmers would uncover (the source is wide open, after all, and, speaking from personal experience, any claim of hyper-performance should arouse enough suspicion to inspire some basic research); the other denies that any gaps exist and actively inhibits their discovery.

That said, they fell very far from the ideal, and for that they should be faulted. They're now paying the price by living in a dense fog of distaste and disgust. I don't think anyone believes it's a "statistics fluke", but it's likely that a mob/tribal mentality plays some role (this is an industry notoriously susceptible to cargo cults) and that many individuals on the burn-MongoDB-at-the-stake bandwagon don't actually have deep (or even moderate!) experience with it and with competing products. Also, would you chalk it up to statistical anomaly that they were able to close funding and that many large-scale systems use their product?

Isn't it possible that the company had shameful marketing practices that repel the programmer psyche, but that the product itself has a unique enough feature set (schemaless yet semi-structured querying, indexing, dead-simple partitioning and replication, a non-proprietary management language, simple backups, all at the cost of RAM/SSD) to make it compelling for some problem classes?
That same blog notes Postgres is susceptible to the same issue: http://aphyr.com/posts/282-call-me-maybe-postgres
Put another way, I see map/reduce as a background task: building indexes, reporting, etc. I don't think speed is as important there as it is for foreground tasks like GET, PUT, and DELETE.

For these operations, especially lookups by key, Postgres has always been right up there. What's happening now is that Postgres is getting really good at NoSQL features (documents, key-value pairs, deep querying, etc.), good enough that if you have Postgres you can do pretty much anything you could have done with Mongo, while still making use of everything else Postgres has to offer.
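For example, the existing hstore extension already gives you indexed key/value querying today (the work under discussion adds nesting); the schema here is invented:

    CREATE EXTENSION IF NOT EXISTS hstore;

    CREATE TABLE profiles (id serial PRIMARY KEY, attrs hstore);
    CREATE INDEX profiles_attrs_idx ON profiles USING GIN (attrs);

    -- GIN-indexed containment query plus a per-key lookup
    SELECT id, attrs -> 'name'
    FROM   profiles
    WHERE  attrs @> 'status=>active';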
That said, if Postgres matches/exceeds enough Mongo features, it could certainly outshine Mongo for those scenarios.
AFAIK, map-reduce is relatively expensive on Mongo... typically recommended only for background execution, nothing synchronous.
Easy replication ... that's why I use MongoDB.
This makes me very, very happy.
I should point out why:
- My document structures now match hstore pretty much 1:1 (we have nested arrays of hashes)
- I can query and report on the data with normal tools
- I can update individual fields in my data (currently impossible)
- I can do relational joins and the like in the DB (currently impossible)
- I can have sensible transactions (rough sketch of these last three below)
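To make those last three concrete, here's roughly what they look like with today's flat hstore, assuming a hypothetical docs table with an hstore column named data (hstore's || operator merges key/value pairs, overwriting existing keys):

    BEGIN;  -- a sensible transaction

    -- update a single field inside a document, in place
    UPDATE docs
    SET    data = data || 'status=>archived'
    WHERE  id = 42;

    -- join document data against an ordinary relational table
    SELECT d.data -> 'title', u.email
    FROM   docs  d
    JOIN   users u ON u.id = (d.data -> 'owner_id')::int;

    COMMIT;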