

MongoDB 2.2.0 Released - francesca
http://www.mongodb.org/downloads

======
cmer
My experience with MongoDB hasn't been the most pleasant in a write-heavy
environment. Until they fix the write lock properly, MongoDB is pretty much
useless for many high throughput applications in my opinion...

The new DB level locking introduced in this release is a joke. There's not
much difference between that and the old global write lock unless you split
your database into dozens of smaller ones. What a pain. I wish they'd just stop
pretending and address the issue properly once and for all.

I really want to like and use MongoDB because the way data is represented and
how it can be queried is awesome.

~~~
dmytton
It's inaccurate that there's no difference between the global lock and
database level locking. Whilst it's true that locking is now down at the
database level and you get benefits from splitting into multiple databases,
the real benefit is from the new PageFaultException architecture. Even using a
single database you will see significant performance improvements.

I benchmarked this at [http://blog.serverdensity.com/goodbye-global-lock-
mongodb-2-...](http://blog.serverdensity.com/goodbye-global-lock-
mongodb-2-0-vs-2-2/) and there is a video explaining how this works at
[http://www.10gen.com/presentations/concurrency-internals-
mon...](http://www.10gen.com/presentations/concurrency-internals-mongodb-2-2)

~~~
cmer
That's good to know. It still feels like a half-assed solution to a very
serious problem, however.

I definitely won't be looking at MongoDB again until this is fixed for good,
and until I know replication is more reliable. I had terrible problems with
that too, unfortunately.

~~~
tommoor
It's not half-assed so much as a first step. 10gen have previously stated that
the plan is to gradually increase the granularity of write locks over
successive releases.

~~~
eckyptang
Until they end up with MVCC...

~~~
mh-
aggregation and then MVCC.. the circle is complete.

------
ranman
Awesome release,

notes summary:

• Aggregation Framework to fix some map-reduce woes.

• TTL Collections

• DB Level Locking (A step in the right direction)

• Better yielding on page faults

• Tag aware sharding (HELL YES)

• Better Read Prefs

• Indexes now handled by mongodump/mongorestore

• mongooplog replay is awesome for getting point in time backups

• Shell now has full unicode, multiline command history, and $EDITOR support
(all from the change to linenoise)
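
TTL collections are worth a closer look: a TTL collection is just a normal
collection with a special index on a date field. A quick sketch (the
`sessions` collection and `createdAt` field are made-up names for
illustration):

```javascript
// Documents are removed roughly an hour after their createdAt date;
// a background task checks for expired documents about once a minute.
db.sessions.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 })
```

No more cron jobs deleting stale sessions/logs by hand.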

~~~
cagenut
Could you go into why tag-aware sharding is a "HELL YES" for you? I was kind
of already of the opinion that Mongo sharding is much too complicated
(<http://i.imgur.com/c3Dpq.jpg>), and this seems to compound that. Is it
something you've needed/wanted so badly that it's worth it?

------
_jmar777
I've been playing around with the Aggregation Framework lately (using the
release candidate). The performance seems to be pretty reasonable, especially
when compared to similar tasks with the old MR framework. A quick and dirty
benchmark number in case anyone is interested:

* Obligatory unscientific, probably not meaningful, etc. disclaimer.

 _Mongo Version:_ 2.2.0-rc1

 _Hardware:_ MBP, Snow Leopard, 2.2 GHz Intel Core i7, 8 GB mem

 _Data:_ Single collection with 500k records (machine generated time-series
event data)

 _Query Pipeline:_

    
    
      [
          {
              $match: { ts: { $gte: 1293858000000, $lt: 1296536400000 } }
          },
          {
              $group: {
                  _id: 'aggregations',
                  sum: { $sum: '$foo' },
                  num: { $sum: 1 },
                  avg: { $avg: '$bar' }
              }
          }
      ]
    

_Results:_ The time range matched above covers 42,466 documents within the
collection. The average response time over 50 runs was 419ms. Not
exactly "Big Data OLAP" stuff just yet, but plenty fast enough for most use
cases involving reasonably small sets of data. Great job to the MongoDB team!
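
For anyone curious how the pipeline is actually invoked: it goes through the
aggregate command (a sketch; I'm assuming the collection is named `events`):

```javascript
db.runCommand({
    aggregate: "events",           // collection name
    pipeline: [
        { $match: { ts: { $gte: 1293858000000, $lt: 1296536400000 } } },
        { $group: { _id: 'aggregations', sum: { $sum: '$foo' } } }
    ]
})
```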

~~~
mathias_10gen
Out of curiosity, did you have an index on the ts field that you are matching
with?

By the way, if you (or anyone else for that matter) come up with useful
benchmarks I'd love to get a copy of them at mathias@10gen.com. I have a few
of my own, but I'd like to get some real-world workloads from the community to
test potential optimizations against.

~~~
_jmar777
Yes, I had a single index:

    
    
      db.events.ensureIndex({ ts: 1 });
    

I'll try to clean up my benchmark code a little, throw it in a gist, and then
I'll send it your way.

------
icey
Release notes: <http://docs.mongodb.org/manual/release-notes/2.2/>

~~~
izak30
Thanks. I clicked the "Changelog" link on OPs page and that was worthless link
to jira <https://jira.mongodb.org/browse/SERVER/fixforversion/11496>

------
efbenson
For startup projects I love Mongo because we can get a product up and running
very quickly. However, I always feared in the back of my mind that we would
have to move off of it if our service got too big; maybe it was all the
complaints from a small portion of heavy users. Regardless, big updates like
this go a long way toward making me feel content continuing to use it as we
grow.

~~~
rgnitz
With MongoDB, you are able to shard your system. This means you can grow your
databases horizontally. This is not something you can easily (or cheaply) do
in the world of RDBMS. You will see much better scalability with MongoDB than
with something like MySQL.

~~~
rademi
You can shard RDBMS relatively easily -- basically you wind up pushing a part
of your database structure into your clients, so your clients can decide which
shard to use.
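
That client-side piece can be tiny. A sketch of the idea (the hostnames and
hash function are made up, and a real setup would want consistent hashing to
ease rebalancing):

```javascript
// Pick a shard by hashing the routing key client-side.
var SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]; // assumed hostnames

function shardFor(key) {
  // Simple deterministic string hash (djb2-style); same key -> same shard.
  var h = 5381;
  for (var i = 0; i < key.length; i++) {
    h = ((h * 33) + key.charCodeAt(i)) >>> 0;
  }
  return SHARDS[h % SHARDS.length];
}
```

The downside, as I said, is that this mapping is now baked into every client,
so topology changes ripple outward.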

The cost, though, is that you wind up having a difficult time doing some
things that MongoDB can't do. (For example: Renormalizing your database...
Does that even mean anything for MongoDB?)

There's something to be said, of course, for simplifying your design. But it's
probably a good idea also to make sure your design reflects your requirements.

~~~
rgnitz
Indeed you can distribute data to multiple independent RDBMS nodes, but
balancing when new nodes are added is probably a manual process (or a lot of
custom code) that is likely to require downtime. To avoid downtime, your
application would need to write to both chunks while it is
balancing/migrating (and then delete the old data/chunks once they are
migrated to the new RDBMS). Essentially, you would be rewriting what is
already in MongoDB.

You would also have to write a parallel query engine.

I too am a fan of simple designs, but I think rolling your own sharding on top
of a RDBMS would likely be a massive chunk of time.

There are really expensive commercial products working on horizontally scaling
RDBMS... but personally, I prefer open source and document oriented databases
:-)

------
dkhenry
In all this talk of locking we're missing the real benefit to a heavy write
environment: better yielding. This is going to be huge for those one-off
writes that in the past would have held the lock. Also, I am looking forward
to playing with the new aggregation framework.

------
prax2
There is one thing still holding me back from Mongo, or I'd be using it right
now instead of Postgres: native decimal support [1].

There are a lot of proposed workarounds, and some surely work fine if you're
only dealing with 2-decimal currency.

The solutions don't scale to arbitrary precision per field, though.
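
For reference, the usual 2-decimal workaround looks like this (illustrative
helpers, not from any driver), and it's exactly the approach that stops
working once precision varies per field:

```javascript
// Store money as integer cents so fixed-point currency never round-trips
// through a float. Non-negative, 2-decimal amounts only.
function toCents(amount) {                 // "19.99" -> 1999
  var parts = amount.split(".");
  var frac = parts.length > 1 ? (parts[1] + "00").slice(0, 2) : "00";
  return parseInt(parts[0], 10) * 100 + parseInt(frac, 10);
}

function fromCents(cents) {                // 1999 -> "19.99"
  return Math.floor(cents / 100) + "." + ("0" + (cents % 100)).slice(-2);
}
```

Sums over integer cents stay exact, which is the part that silently breaks
with floats.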

The new aggregation framework is a fantastic step forward and from my testing
is relatively peppy, even at a 150k document collection.

Side note: Anyone know a nosql solution that doesn't treat decimals as floats?

[1] <https://jira.mongodb.org/browse/SERVER-1393>

~~~
thijsc
A native decimal datatype would be awesome indeed.

------
eranation
The Aggregation Framework is a great feature, already in production...

~~~
encoderer
This does look cool. Aggregation previously was a joke with its collection
size limitation.

I haven't used Mongo in a year. Are Map Reduce jobs still single threaded?

~~~
_jmar777
Yes.

------
taterbase
Does anyone know if Collection level locking is in the pipeline?

~~~
comerford
It is; it's next, in fact, but it was decided to make sure DB level locking
works first (and gets the bugs shaken out of it) before moving on to the next
level of locking. Dwight gives a decent description of the thinking in this
presentation:

[http://www.10gen.com/presentations/concurrency-internals-
mon...](http://www.10gen.com/presentations/concurrency-internals-mongodb-2-2)

------
lttlrck
Great news. We use MongoDB for call data collection, distributed control, and
as a firmware/application fileserver for telecoms testing.

------
craigyk
The true secret to Mongo's awesomeness for me has always been the dev
experience. JSON documents, query by example, etc.: Awesome API.

------
throwaway54-762
A friend of mine wrote this a few months back:

[http://blog.engineering.kiip.me/post/20988881092/a-year-
with...](http://blog.engineering.kiip.me/post/20988881092/a-year-with-mongodb)

Almost all of it is still valid. "To be fair, the global write lock is now
JUST a DB level write lock. Living in the future guys."

------
meghan
Blog post with the details is up
[http://blog.mongodb.org/post/30451575525/mongodb-2-2-release...](http://blog.mongodb.org/post/30451575525/mongodb-2-2-released)

------
Ricapar
Still no support for Solaris on SPARC! :(

~~~
crcsmnky
I'm not trolling you when I ask this, I'm simply curious.

Of the open source projects you follow, how many still do builds for Solaris?
How often do you have to build them yourself?

~~~
Ricapar
Legitimate question.

Not many do. There are quite a few things I build myself if I really, really
want to use them. A lot of the time it isn't worth it.

I would totally be using MongoDB for some projects (it fits the bill
-perfectly-), but my resources for those are usually limited to Solaris on
SPARC.

I do have an x86 desktop with Linux as my main PC at work, so I don't miss out
on all the fun completely.

However, I do mostly sysadmin stuff, so a lot of the things I use come in the
form of scripts, which are often cross-platform.

Though I do come across some install scripts that just blatantly assume that
everything is running Bash on Linux. Those are fun.

I have very little expectation that the MongoDB team will ever ship a non-x86
version. They do a lot of optimisations deep inside that rely on
architecture-specific things. But one can hope :)

------
skram
Awesome! Looking forward to seeing this propagate through the 10gen official
repos like yum.

~~~
skram
...and they're out!

------
roger043
Have had a great experience with MongoDB

~~~
raffpaquin
I use it for my ecommerce platform. Great engine!

~~~
luxede
Which e-commerce platform? I'm just curious.

------
tegansnyder
I can't wait to upgrade.

------
michymi
Interesting

