Couch and Mongo are optimized for different use cases.
CouchDB is pretty much the only (open-source) game in town if you care about offline replication. It is also designed for extreme reliability and concurrency. CouchDB's programming model is designed to scale from a smartphone to a datacenter, and ops teams can scale it strictly in the HTTP domain.
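To make the "strictly in the HTTP domain" point concrete: replication in CouchDB is triggered by a plain HTTP POST to `_replicate`, which is why the same mechanism works between a phone and a datacenter node. A minimal sketch (server URL and database names here are hypothetical):

```python
import json
from urllib import request

def replicate(server, source, target, continuous=False):
    """Build the HTTP request that asks a CouchDB server to
    replicate `source` into `target`. Replication is just a POST
    to /_replicate -- no special protocol, only HTTP + JSON."""
    body = json.dumps({
        "source": source,
        "target": target,
        "continuous": continuous,
    }).encode()
    req = request.Request(
        server + "/_replicate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    return req  # caller would run: request.urlopen(req)

# Example: pull a remote database into a local one.
req = replicate("http://localhost:5984",
                "http://example.com:5984/notes", "notes")
```

Because it is just HTTP, ops teams can put the usual load balancers and caches in front of it.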
MongoDB trades robustness and concurrency for faster serial performance for single clients. MongoDB's programming model is closer to MySQL or Redis, so it isn't as big a leap for developers used to traditional 3-tier architectures.
I expect to see a lot of apps move between the two platforms as people start to find the sweet spot that they are interested in.
I saw an old talk you did where you were advocating simply using Ajax (jQuery, I think) to query CouchDB directly from the front end. Is this something you still recommend? I don't think Mongo's security model would allow this.
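For context, that direct-from-the-browser query boils down to a plain GET against a view URL, with view parameters JSON-encoded in the query string. A sketch of building such a URL (database and view names are made up):

```python
import json
from urllib import parse

def view_url(server, db, ddoc, view, **params):
    """Build the URL a browser-side $.getJSON() call would hit to
    query a CouchDB view directly, with no application server in
    between. CouchDB expects view parameters to be JSON-encoded."""
    qs = parse.urlencode({k: json.dumps(v) for k, v in params.items()})
    return "%s/%s/_design/%s/_view/%s?%s" % (server, db, ddoc, view, qs)

url = view_url("http://localhost:5984", "blog", "posts", "by_date",
               descending=True, limit=10)
# A page could then fetch `url` with jQuery: $.getJSON(url, render)
```

Since the browser talks straight to the database, access control has to live in CouchDB itself (validation functions, per-database permissions) rather than in a middle tier.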
Unfortunately both are useless for mortals, because there are no transactions and no locks.
It's just impossible to write applications where uniqueness and real ACID transactions are required. Please don't point me to CouchDB's bulk document API; I don't want to dance with conflicts and inconsistency every time I write a simple application with user registration, for example. The same goes for apps with direct purchases, where you need to update the quantity of an item in stock. Just impossible. Solutions with "inventory tickets" sound insane. Unique fields as an _id also sound insane, because it's impossible to create complex "unique keys".
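For concreteness, the "_id as unique key" workaround mentioned above looks like this: since CouchDB only enforces uniqueness on `_id`, a unique field (or a compound key) has to be flattened into it. A sketch, with hypothetical field names:

```python
def registration_doc(email, username):
    """Build a user doc whose uniqueness rides on _id.
    PUTting a second doc with the same _id fails with 409 Conflict,
    which is the only uniqueness check CouchDB gives you. A compound
    key would be flattened the same way, e.g. "user:<email>:<realm>"."""
    return {
        "_id": "user:%s" % email.lower(),
        "type": "user",
        "email": email,
        "username": username,
    }

doc = registration_doc("Alice@example.com", "alice")
```

It works for a single field, but as the parent says, it gets awkward fast once the "unique key" is anything more complex.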
Both are cool, seriously, and you know it. Lots of developers are going to use this software, but every time, the lack of such features turns these people away.
1. Add real transactions to CouchDB/Mongo
2. Add unique indexes to CouchDB. IIRC Mongo already has them.
3. Add map/reduce chaining to CouchDB
4. Dominate the world!
There are transactions in MongoDB; they just wrap one operation on one document. But that one operation can be quite complex.
For example, one op can pick a money account, decrement its balance, and push an entry onto its list of outbound transfers. After a few more similarly atomic steps, you have a safe and restartable money transfer.
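A sketch of that single-document atomic step using MongoDB's update operators (collection and field names are hypothetical; with pymongo you would pass these two dicts to an update call on the accounts collection):

```python
def debit_step(account_id, amount, transfer_id):
    """Build the query/update pair for one atomic debit.
    Matching on balance >= amount makes the debit conditional, and
    because both modifications target one document, the decrement
    and the journal push happen together or not at all."""
    query = {"_id": account_id, "balance": {"$gte": amount}}
    update = {
        "$inc": {"balance": -amount},
        "$push": {"outbound": {"xfer": transfer_id, "amount": amount}},
    }
    return query, update

query, update = debit_step("acct-42", 100, "xfer-7")
```

If the update matches no document, the balance was insufficient (or the step already ran), which is what makes the multi-step transfer restartable.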
The upside of working with this extra complication is that your data scales via sharding far beyond the maximum size of a MySQL installation.
I've been using Mongo and Tornado together for a while, and it's great. I initially chose Mongo over Couch because of its fast ad hoc querying. The Python client is also well-written and well-documented.
One thing to watch out for, though, is that only the "unstable" dev version of MongoDB (1.3.x) has read concurrency; before 1.3.x, Mongo uses a global read/write lock per operation. General and index-assisted reads are ultra-fast in Mongo, but a bigger map/reduce or group call will block other requests until it completes, possibly causing traffic to back up. Because of that global lock, all writes block too, but I've never had a problem with that IRL. Writes are super-fast.
I've been using CouchDB for a while, and have now started experimenting with MongoDB. The latter seems more suitable for me, as it lets me atomically perform several operations at once. CouchDB, on the other hand, provides bulk updates, but the all_or_nothing option never fails, so atomicity is not actually enforced.
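For anyone unfamiliar with the API being discussed: a bulk update is a POST to `_bulk_docs`, and the `all_or_nothing` flag changes conflict handling rather than providing a transaction. A sketch of the request body (database and doc contents are made up):

```python
import json

# With "all_or_nothing": true, CouchDB saves every doc even when a
# doc conflicts (recording conflicting revisions) instead of
# rejecting it -- so the call doesn't fail, which is exactly why it
# isn't an atomicity guarantee.
payload = json.dumps({
    "all_or_nothing": True,
    "docs": [
        {"_id": "order:1", "status": "paid"},
        {"_id": "stock:widget", "quantity": 9},
    ],
})
# POST payload to http://localhost:5984/shop/_bulk_docs
```

Without the flag, individual docs in the batch can be rejected with conflicts while the rest are saved, so either way you don't get all-or-nothing semantics across documents.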