

MongoDB rocks my world - meghan
http://blog.pythonisito.com/2011/10/mongodb-rocks-my-world.html

======
wtf242
We have been using MongoDB in a production environment for almost a year and a
half now. It's great, but there are some serious issues that you should know
about.

1. Writing locks the database. By database I mean the ENTIRE mongodb instance.
2. A query can only use one index at a time. You can create multi-key indexes, though.
3. Replication is more complicated than it seems. We still have issues with replication not working correctly when an instance fails.
4. There is a limited number of indexes per collection. I think the current limit is 64.
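Point 2 is worth dwelling on: one compound index on several fields can still serve queries on a *prefix* of those fields. A minimal pure-Python sketch (not MongoDB code; the data and field names are made up) of why that works:

```python
# Pure-Python sketch of a compound "index" on (user_id, created_at):
# entries kept sorted by the full key. A query on the leading field
# alone can binary-search the same sorted structure (the prefix rule),
# so it doesn't need a second, separate index.

from bisect import bisect_left, bisect_right

index = sorted([
    (1, "2011-10-01"), (1, "2011-10-05"),
    (2, "2011-10-02"), (3, "2011-10-03"),
])

def find_by_user(uid):
    # Range-scan all entries whose leading field equals uid.
    lo = bisect_left(index, (uid, ""))
    hi = bisect_right(index, (uid, "\xff"))
    return index[lo:hi]

matches = find_by_user(1)  # both of user 1's entries, one index scan
```

An index on `created_at` alone, by contrast, would have to be created separately and would count against the per-collection limit.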

~~~
rick446
Writing does lock the database (or at least the shard), but if the page is in
RAM, that means you're locked for the duration of a write to memory, which is
inconsequential. The problem comes when you try to write to a page that's not
resident. In that case, you can end up (worst case) having to write a dirty
page to disk to free up a slot, load the page you _want_ to write, and then
write it.

This can be really time-consuming, so one "fix" is to retrieve a document
before writing it (thus guaranteeing it will be resident in RAM). More recent
versions of MongoDB (2.0 onward, IIRC) also try to yield the write lock before
faulting on a write to avoid this problem (though it doesn't work 100% of the
time).
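The cost asymmetry described above can be sketched with a toy page-cache model (pure Python, made-up cost units, nothing MongoDB-specific):

```python
# Toy model of the worst case: writing to a page that is NOT resident
# may first have to flush a dirty page to free a slot, then load the
# target page from disk, before the cheap in-memory write can happen.

RAM_SLOTS = 2
resident = {"A": "clean", "B": "dirty"}  # page -> clean/dirty

def write(page):
    cost = 0
    if page not in resident:
        if len(resident) >= RAM_SLOTS:
            victim, state = resident.popitem()
            if state == "dirty":
                cost += 10       # flush the dirty victim to disk
        cost += 10               # load the target page from disk
        resident[page] = "clean"
    cost += 1                    # the in-memory write itself
    resident[page] = "dirty"
    return cost

cost_resident = write("B")  # page in RAM: just the memory write
cost_fault = write("C")     # page fault: flush + load + write
```

Pre-reading the document is exactly the trick of paying the load cost up front, outside the write lock.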

Oh, and one other thing to be aware of: anything that uses the
JavaScript engine in MongoDB is going to hit the SpiderMonkey global
interpreter lock, so you probably want to avoid those features if performance
is a concern ($where, .group(), .mapReduce(), etc.)
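Many $where predicates can be rewritten as native query documents that never touch the JS engine. A small sketch (these are just the query dicts a driver like pymongo would send; field names are made up):

```python
# "status == 'active' and clicks > 100", expressed two ways.

# Runs JavaScript server-side, subject to the interpreter lock,
# and can't use indexes:
js_query = {"$where": "this.status == 'active' && this.clicks > 100"}

# Native operators: no JS engine involved, and indexes on
# status/clicks can be used:
native_query = {"status": "active", "clicks": {"$gt": 100}}
```

Reserving $where for predicates that genuinely can't be expressed with native operators keeps the JS lock off the hot path.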

~~~
bretthoerner
In your best _and_ worst case scenarios for the lock, you forgot that MongoDB
will also need to write a journal entry by default (1.9.2+).

~~~
mathias_10gen
Writing to the journal is actually done outside of the write lock most of the
time. It does hold a read lock; however, as of 2.0 most commits will release
the lock before doing any disk I/O.

------
james33
I've been using MongoDB with a custom ad server I built over the summer that
is hitting 10,000 impressions per second at peak times. This is all on one
box, and it hasn't broken a sweat yet. Because of that positive experience, we
have decided to use MongoDB for all future projects instead of MySQL (we make
multiplayer games).

~~~
swah
Nice data point. What is the rest of the stack?

~~~
james33
Linux, Nginx, PHP

------
hackDaily
I've been using MongoDB for about 5 months now and can say that it is an
excellent tool. I've been tinkering with building a very fast search engine
for my latest app by combining MongoDB with ElasticSearch and it's been a
wonderfully pain-free experience. While I certainly believe in picking the
right tool for the job, I don't see myself going back to SQL anytime soon.

~~~
hackDaily
It's really straightforward for simple usage. There's no official river for
MongoDB, so for now, since I've got very small documents, I just save in
MongoDB and then index an even smaller searchable document in ElasticSearch.
My front-end talks to ElasticSearch through a simple node/expressjs route,
which was made far simpler by this NPM module:

<https://github.com/rgrove/node-elastical>

I apologize for using the term "combining" as it's slightly misleading. I
actually use ElasticSearch for search functionality only. I use MongoDB for
the standard DB stuff (read, update, etc...)
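The dual-write pattern described above is easy to sketch: the full document goes to MongoDB, and a trimmed projection goes to the search index. A hypothetical sketch (field names, the helper, and the document are all made up):

```python
# Keep the full document in MongoDB; derive a much smaller document
# containing only the searchable fields for ElasticSearch.

SEARCHABLE_FIELDS = ("title", "tags")

def searchable_doc(full_doc):
    """Trim a stored document down to the fields worth indexing."""
    return {k: full_doc[k] for k in SEARCHABLE_FIELDS if k in full_doc}

full = {
    "_id": "abc123",
    "title": "MongoDB rocks",
    "tags": ["nosql"],
    "body": "a long body that never needs full-text search",
}

small = searchable_doc(full)  # only title and tags go to ElasticSearch
```

One thing to keep in mind with this design: the two stores are updated independently, so the search index can briefly lag the primary data.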

------
malbs
When 10gen finally pushes out the lock-per-collection changes, it will improve
mongodb significantly. We have a couple of collections that are heavy-write
and a couple of others that are heavy-read, and we can't run them on the same
instance due to lock contention - we have to run two separate mongod
processes, because the global lock kills performance.

As long as you understand that issue, you can work around it, and there are
future changes coming that will address it, which will be fantastic.
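The two-process workaround is just a matter of launching separate mongod instances with their own data directories. A hypothetical sketch (ports and paths are made up):

```shell
# Run write-heavy and read-heavy collections in separate mongod
# processes so they don't contend on one global lock.
mongod --port 27017 --dbpath /data/mongo-writes   # write-heavy collections
mongod --port 27018 --dbpath /data/mongo-reads    # read-heavy collections
```

The application then connects to whichever port holds the collection it needs.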

------
djb_hackernews
I REALLY want to try mongo for a side project I am building, however the
single server durability is an issue. My data could easily fit in
mysql/postgres however I am storing data that fits better in a document store.
I am inclined to just use postgres for now because it's sort of silly to set
up/pay for multiple servers for such a small project.

UPDATE: Apparently my information is out of date. Hopefully my confusion (and
the answers below) helps someone else. Thanks.

~~~
rick446
MongoDB has had single-server durability since 1.8, with the journal. If you
put the journal on an SSD, you can even get it almost for free
performance-wise.

~~~
rick446
Yeah, I guess I should make the point that if you _don't_ put the journal on
SSD (and you only need a few gigs of SSD to journal terabytes of spinning disk
storage), you _will_ see significant slowdown.
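Since the journal lives in a subdirectory of the dbpath, one way to put only the journal on an SSD is a symlink. A sketch with hypothetical paths:

```shell
# Point the journal subdirectory at SSD-backed storage before
# starting mongod; the data files stay on spinning disk.
mkdir -p /ssd/mongo-journal
ln -s /ssd/mongo-journal /data/db/journal
mongod --dbpath /data/db --journal
```

A few gigabytes of SSD are enough here, since the journal is small and rotated.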

------
gorm
The only thing I don't like about MongoDB/Node is that it requires a VPS and
can't be deployed in a shared hosting environment. Or is there a company that
can host the MongoDB/Node combination at an affordable price today?

~~~
shykes
dotCloud runs a fully replicated MongoDB _along with your entire application_
- no need for a separate add-on provider.

<http://docs.dotcloud.com/services/mongodb/>

