
MongoDB 3.2 – A First Forward Look - thomcrowe
https://www.compose.io/articles/mongodb-3-2-a-first-forward-look/
======
nevi-me
Very excited about the release. Schemas are something that ODBC connectors are
doing already, although they essentially normalise the collections into
tables, which sometimes looks ugly. But I suppose the aim is mostly to plug
that into BI tools like Qlik, Tableau and friends.

I'm excited about the improvements being made in the Aggregation Framework.
$lookup sounds interesting too, and hopefully it lands in 3.1.6 so we can
start trying it out. For me, the key takeaway from interacting with the
MongoDB team about what's in the pipeline has been that I should invest more
in the Agg Framework for some of my more complex workloads.
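For anyone who hasn't followed the $lookup discussion: it's been described as a left outer join between two collections inside an aggregation pipeline. A minimal JavaScript sketch of that join semantics on plain arrays (the collection contents and field names here are made up for illustration, and this is not the actual $lookup API):

```javascript
// Rough emulation of $lookup's left-outer-join semantics on plain arrays.
// Each local doc gets an array of matching foreign docs under `asField`;
// docs with no match get an empty array, mirroring a left outer join.
function lookup(localDocs, foreignDocs, localField, foreignField, asField) {
  return localDocs.map(function (doc) {
    var matches = foreignDocs.filter(function (f) {
      return f[foreignField] === doc[localField];
    });
    var joined = Object.assign({}, doc);
    joined[asField] = matches;
    return joined;
  });
}

// Hypothetical example: join orders with their customer documents.
var orders = [{ _id: 1, custId: "a" }, { _id: 2, custId: "z" }];
var customers = [{ custId: "a", name: "Ada" }];
var result = lookup(orders, customers, "custId", "custId", "customer");
// Order 1 picks up Ada; order 2 keeps an empty `customer` array.
```
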

WiredTiger will be the default from what I've heard. I've seen that the
TokuTek people have slowed down with TokuMX and are building their own
pluggable storage engine for MongoDB.

~~~
portmanteaufu
According to a recent mailing list discussion [1], a new release of TokuMX is
due by this fall. They are indeed building a pluggable storage engine, but the
(very young) pluggable storage API limits the performance and feature
improvements TokuMX SE is able to make.

[1] [https://groups.google.com/forum/#!topic/tokumx-user/3uZzqURu...](https://groups.google.com/forum/#!topic/tokumx-user/3uZzqURuGwY)

~~~
nevi-me
Yeah, and I suppose it adds some complexity, since new features now have to be
usable by every storage engine. I can't remember what the JIRA ticket was
about, but I've seen that there are some edge cases that have to be considered
for the pluggable storage API.

------
jasondc
Support for "read committed" is another big improvement coming in 3.2:
[https://jira.mongodb.org/browse/SERVER-18022](https://jira.mongodb.org/browse/SERVER-18022)

~~~
nevi-me
I wonder if this was sparked by the Jepsen analysis of read consistency. I
hadn't come across the ticket; it looks interesting.

------
skrowl
Schemas in my schema-less database? Why not just use MS SQL / Oracle / MySQL?

~~~
nevi-me
I think it's schemas in the sense that you start enforcing some rules at the
database level for your JSON/BSON/JSONB data, instead of relying on ORMs.

Defining schemas at the ORM level means you always have to replicate that
schema (at least for whatever you write back to the DB) in every project that
uses the data in the database.

The JSON storage route has been flexible, and I would assume that most people
who use Mongo have some form of schema, even if it's a very loose one. For
example, a lot of my schema definitions look like:


    {
      field1: String,
      field2: {}
    }

where field2: {} means I can pretty much dump anything in there.

I presume (without yet testing) that with 'schema' I'd still be able to do the
above, except that I'd enforce field1 to be a String and remain
'schema-less'/loose on field2.
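If it works that way, the check would be something like this toy JavaScript sketch: field1 must be a string, field2 is left completely unconstrained. (This is just the semantics being described, not MongoDB's actual validator syntax.)

```javascript
// Toy per-document check mirroring a deliberately loose schema:
// field1 must be a string; field2 may hold anything, or be absent.
// Not MongoDB's real validation API -- an illustration of the idea.
function validateDoc(doc) {
  if (typeof doc.field1 !== "string") return false;
  return true; // field2 is intentionally unconstrained
}

var ok = validateDoc({ field1: "hello", field2: { anything: [1, 2, 3] } });
var bad = validateDoc({ field1: 42, field2: "whatever" });
```
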

