examiner.com is switching to Drupal soonish and has funded most of the work to write the Drupal/MongoDB integration.
As an admittedly rough measure, check out the number of "watchers" on github for each project's leading plugin:
1106 - MongoMapper
536 - CouchREST
312 - Cassandra
I've only built one Rails app with NoSQL in it, but at first glance, at least, Mongo seemed the most fully baked of the group.
695 - Mongoid http://github.com/durran/mongoid
seems to be the Rails/Mongo project to watch. They've got a slick-looking homepage, too: http://mongoid.org
Hoping to try using it on my next project.
It's not quite as mature, but more documentation updates are currently in progress.
What didn't you like about MongoMapper and/or Mongoid? Starting from scratch is a big job.
MongoMapper in particular has come a long way since then but still has a few issues in my mind. For example, attributes are converted to/from their mongo representation every single time they are accessed, rather than only when the model is saved or loaded. I also believe MongoModel has a nicer model for typecasting property values (see http://gist.github.com/287379 for an example), and I disagree with the use of has_many associations for embedded collections.
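To make the typecasting complaint concrete, here's a toy sketch of the two strategies in plain Ruby. This is not actual MongoMapper or MongoModel code; the class names and structure are made up for illustration:

```ruby
# Strategy 1: convert on every access (the behavior being criticized).
class CastOnAccess
  def initialize(raw)
    @raw = raw                    # stored as-is, e.g. {"age" => "42"}
  end

  def age
    Integer(@raw["age"])          # conversion work repeated on every read
  end
end

# Strategy 2: convert once at load time, and back once at save time.
class CastOnLoad
  attr_reader :age

  def initialize(raw)
    @age = Integer(raw["age"])    # converted once when the document loads
  end

  def to_mongo
    { "age" => @age }             # converted back only when saving
  end
end
```

With the second strategy, a model whose attributes are read in a loop doesn't pay the conversion cost on every read.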
MongoModel isn't perfect either, but it has been the path of least resistance for me in getting my app running on MongoDB.
Here's a pretty thorough comparison: https://wiki.basho.com/display/RIAK/Riak+Compared+to+MongoDB
Note: written by the authors of Riak, but MongoDB contributors chimed in on the comments.
"MongoDB supports atomic operations on single documents. MongoDB does not support traditional locking and complex transactions for a number of reasons. . ."
What made you think that it was "locking all databases when a write is occurring"?
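For what it's worth, "atomic operations on single documents" refers to Mongo's update modifiers ($inc, $set, and friends): the server applies the whole modifier document to one document as a single step, so no reader sees a half-applied update. Here's a toy Ruby simulation of what the server does with those modifiers; apply_update is made up for illustration, not driver code:

```ruby
# Apply a MongoDB-style update document to a plain Ruby hash.
# Only $inc and $set are simulated here.
def apply_update(doc, update)
  update.fetch("$inc", {}).each { |k, n| doc[k] = doc.fetch(k, 0) + n }
  update.fetch("$set", {}).each { |k, v| doc[k] = v }
  doc
end

post = { "title" => "hello", "views" => 10 }
apply_update(post, "$inc" => { "views" => 1 }, "$set" => { "hot" => true })
# post is now {"title"=>"hello", "views"=>11, "hot"=>true}
```

On a real server the whole modifier set above happens atomically for that one document; what you don't get is a transaction spanning multiple documents.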
It's not ideal, but it is an improvement.
If only they had an equivalent of InnoDB as a storage engine, it would be awesome.
Note that whilst a write blocks all other writes, the time you need to wait is the time it takes for the in-memory data structure to be modified; you don't have to wait for the write to hit disk. (Writes are only persisted to disk every so often.)
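The flush interval is tunable, if I remember right, via mongod's syncdelay option (a config fragment, with the default of 60 seconds between background fsyncs; that window is roughly what you'd lose on a crash):

```
# mongod flushes dirty pages to disk on a timer rather than per write;
# syncdelay controls the seconds between background fsyncs.
mongod --dbpath /data/db --syncdelay 60
```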
Any idea how replica sets / shards might fit into this picture?
I would love to answer questions if I could figure out how to easily see my replies on HN :/
Are you experiencing data loss with 1.4.0 ?
Reasons (from my understanding) that you had a bad experience:
1. You used a development, unstable version of software in production.
2. 32-bit Mongo cannot store more than 2GB of data; this is a very public, well-known limitation.
1. This is true (there was no warning that I saw when I was downloading it, although one was added later, or maybe I didn't notice it). However, I upgraded to the stable version when it came out to give MongoDB a second chance, because it sounds very good in theory, and I had the same (if not bigger) problems.
2. I hadn't known about it, and, no matter how public it is, the server could just refuse to store more data. Silently corrupting two documents for every document inserted is inexcusable, even if your database was forged in the pits of hell.
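For context on where the ~2GB ceiling comes from: MongoDB memory-maps its data files, so data plus indexes must fit in the process address space. A back-of-the-envelope (the "roughly half" figure is an approximation; the kernel, binaries, and stacks take the rest):

```ruby
# 32-bit address space, in bytes.
total_address_space = 2**32            # 4 GiB addressable
# Roughly half of it ends up usable for memory-mapped data files.
usable = 2**31
puts usable / 2**30                    # => 2 (GiB), hence the ~2GB cap
```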
I think your point about the poster child thing is true. While I meant my post as a sort of "MongoDB didn't look too production-ready to me, but I hope it gets there eventually", people became really polarized and took it either as "this guy is right, MongoDB sucks" or "this guy is an idiot, silently corrupting data is perfectly acceptable if there's a notice on the website"...
* The corruption of data might be excused in the development branch.
* The silent corruption of the data when the server goes past its limit for a 32-bit DB cannot be excused, since the server could die, at the very least.
* The corruption of data due to the process being killed because the connection dropped wasn't MongoDB's fault.
* Requiring 9 GB of RAM for 5 indexes doesn't sit very well with me...
* Silently corrupting data for me to find out days later is not something a stable, unstable or toy database should do...
That said, if you need powerful "full-text searching" I would look to something like Lucene or Xapian.
Looking forward to single server durability in the next one (1.8). Should enable me to convince more clients to add mongo as part of the deployment stack.
A year ago I was equally enthusiastic about MongoDB, CouchDB, and Cassandra. However, at least for the modest scale work that I do I don't really need Cassandra, and MongoDB is so easy to work with. I still really like CouchDB but I have never had a customer request its use, so my experience is limited to just using it for my own stuff.
Hmm, now all that's left is an online hosting solution like Couch.io.
I described the steps here:
Secondly, what scares me (and, frankly, what I find hilarious) is people who jump onto this movement so quickly, moving the entirety of their critical data without understanding the potential downsides, such as data loss with no warning. As a key-value store for non-critical data this kind of thing is brilliant, and data loss can be managed; maybe not tolerated in a high-throughput environment where "cache misses" are a concern, but otherwise, yeah, it's great. Still, look at Facebook, Twitter, and FriendFeed, who are all still using MySQL and scaling out in their own ways.
Also, all of the official client libraries for Mongo are Apache-licensed; it's only the core server that is AGPL. That means you can use Mongo in closed-source applications: only if you make changes to the actual core database do you have to give those changes back to the community, and even then you still don't have to open-source the rest of your application.