
MongoDB 4.0 will add support for multi-document transactions - cyberfart
https://www.mongodb.com/transactions
======
arghwhat
> "... making it the only database to combine the speed, flexibility, and
> power of the document model with ACID guarantees."

PostgreSQL with JSONB columns seems to have beaten them to it by quite a
wide margin. MySQL too, for that matter.

~~~
lilbobbytables
I do really enjoy using Postgres for a hybrid approach, using a jsonb field to
store a bunch of data about an object. It works very well, and query speeds
are great since you can index fields.
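That hybrid setup can be sketched like this (table, column, and field names
are invented for illustration):

```sql
-- Relational columns for the stable parts of the schema,
-- a jsonb column for the flexible document data.
CREATE TABLE events (
    id         bigserial PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    payload    jsonb NOT NULL
);

-- A GIN index speeds up containment queries across the whole document...
CREATE INDEX events_payload_idx ON events USING gin (payload);

-- ...or an expression index can target one frequently queried field.
CREATE INDEX events_type_idx ON events ((payload->>'type'));

-- Query by a field inside the document:
SELECT * FROM events WHERE payload @> '{"type": "signup"}';
```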

------
koolba
Following through the blog to the actual docs:
[https://docs.mongodb.com/master/release-
notes/4.0/](https://docs.mongodb.com/master/release-notes/4.0/)

> By default, multi-document transactions wait 5 milliseconds to acquire locks
> required by the operations in the transaction. If the transaction cannot
> acquire its required locks with the 5 milliseconds, the transaction aborts.

Automatic cancellation rather than actual deadlock detection is going to be
one hell of a footgun.

I'd argue this is a double-barreled footgun, as most usage of MongoDB is from
garbage-collected languages. One wrongly timed GC and your transaction is
dead.

~~~
mayank
> I'd argue this is a double-barreled footgun, as most usage of MongoDB is
> from garbage-collected languages. One wrongly timed GC and your transaction
> is dead.

That’s not how it works. The transaction as a whole is sent to Mongo for
execution server side; the client isn’t manually controlling transaction
execution.

~~~
koolba
> That’s not how it works.

In the example on the docs page it looks like the logic is happening in the
app code: [https://docs.mongodb.com/master/core/transactions/#retry-
tra...](https://docs.mongodb.com/master/core/transactions/#retry-transaction-
and-commit-operation)

Granted it's only writing to some collections but I assumed you can read from
the session during a transaction.

> The transaction as a whole is sent to Mongo for execution server side; the
> client isn’t manually controlling transaction execution.

Are you saying that transactions are serialized as a series of pure updates
and sent to the server as such? I.e., you can't read a value, use it for some
logic, update some other values, repeat ..., then commit? If that's the case,
this would be better labeled "multi-document atomic updates", as (to me)
transaction implies interaction with the data in app code.

~~~
mayank
> In the example on the docs page it looks like the logic is happening in the
> app code

Not quite. The code in the docs you linked to handles what happens when a
transaction does not complete server-side -- typically, you want to retry the
entire transaction a few times in case transient locks have been released or
preconditions have been met. It does not suggest that the transaction is
being controlled/orchestrated by the client.
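The pattern in those docs boils down to a generic retry loop. A minimal
sketch of that pattern -- the `TransientError` class and `fake_transaction`
function here are invented stand-ins, not the driver's actual API:

```python
class TransientError(Exception):
    """Stand-in for a 'transaction aborted, safe to retry' error."""

def run_with_retry(txn_fn, max_attempts=3):
    """Replay the whole transaction body until it commits or we give up.

    This mirrors the docs' pattern: the client doesn't steer individual
    statements mid-flight; it just re-runs the entire transaction when
    the server aborts it for a transient reason (e.g. lock contention).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()
        except TransientError:
            if attempt == max_attempts:
                raise
            # Locks may have been released by now; try the whole thing again.

# Simulated transaction: fails twice on lock contention, then commits.
attempts = {"n": 0}
def fake_transaction():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("could not acquire lock within 5 ms")
    return "committed"

print(run_with_retry(fake_transaction))  # prints: committed
```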

> Are you saying that transactions are serialized as a series of pure updates
> and sent to the server as such?

Yes, generally. Check out examples of Postgres transactions -- they are plain-
text "queries" that are executed with all-or-nothing semantics.

> (to me) transaction implies interaction with the data in app code.

Transactions, generally, are groups of statements/queries that are either all
applied or none at all. They do not imply interaction with the data in app
code, unless the app code itself is executed as part of the transaction itself
(e.g., UDFs or stored procedures). They are like mini-programs that are
shipped to the DB to be executed in a concurrency controlled and undo-able
environment.

------
ngrilly
Now we're waiting for Aphyr to test whether this works as intended :-)

------
mlthoughts2018
MongoDB represents to me a very good way to build a product. There has always
been so much derisive criticism about MongoDB opting to prioritize convenience
of customer workflows above all else, and to go back and add best practices,
basic data safety, etc., on a piecemeal basis after the fact. Customers are
surprisingly willing to put up with problems as long as usability and user
experience is high, and they will wait for other features. Meanwhile, plenty
of other database projects may start out with a more deliberate focus on
classical database safety and guarantees, yet hardly build any customer base.

Even though I may like e.g. Postgres features more, there is still something
to be respected about how MongoDB has operated, and the constant vitriol about
their chosen priorities has always sounded hollow to me, even accounting for
stories about data loss, etc.

Incidentally, I once had the chance to tour the MongoDB office near Times
Square, and boy, I can tell you it is not an office environment for me.
Extremely loud, and they even have things like scooter parking slots and signs
for “scooter etiquette” for rolling around the office on a scooter.

I’m not sure how they are able to focus on any engineering work, but kudos to
them for finding a way.

~~~
jayd16
I won't agree that it's a good way to build a product, but it's a marketing
miracle that Mongo is used in production as widely as it is.

~~~
marenkay
Not trying to be sarcastic here but isn't this the case for most products in
their early years and even for some that are "old"?

Every offering has its issues, but most of the time there is one feature that
makes dealing with them worth it.

~~~
overcast
That's the point. Using relational data in a non-relational system is just
foolish, and introduces issues not worth dealing with. What exactly is MongoDB
going to give you over PostgreSQL in that case?

~~~
marenkay
Short-term: quick to get started. Long-term? A plan to migrate to PostgreSQL.

~~~
overcast
Honestly, spinning up a SQL server really isn't rocket science. SQL is easier
to read and write than that messy mongo query language. Though, if you're used
to taking the quick and dirty route for everything, you're probably using ORMs
anyhow. But why do things twice? Just do it right the first time.

------
jchw
This seems great, but I think until 4.2 they don't plan to have global point-
in-time consistency - just per replica set. I wonder how this affects ACID
semantics?

Also: This is going to be really nice, but I sure hope a major cloud provider
starts providing a managed service. It's very nice having a managed service
like Amazon RDS or Google Cloud SQL.

~~~
rvanmil
Our team has been using both shared and dedicated plans at mLab
[https://www.mlab.com](https://www.mlab.com) for almost a year now and we’re
very happy with the ease of use and we’ve had zero problems so far.

~~~
jaydestro
That's awesome, but these features likely won't be on mLab at launch. You may
be best off using MongoDB Atlas, which will include all 4.0 features.

~~~
rvanmil
That's probably true. I think it took mLab a couple of months to support 3.6.
But then again, having a major version release available at launch doesn't
seem that important to me compared to all other database-as-a-service
features.

Since you mentioned you work for MongoDB, if you guys could partner with
Heroku and add Atlas to their official add-ons our team might be able to take
a look and switch ;)

~~~
jaydestro
[https://www.mongodb.com/blog/post/integrating-mongodb-
atlas-...](https://www.mongodb.com/blog/post/integrating-mongodb-atlas-with-
heroku-private-spaces) \- private spaces works.

------
TomK32
Goodbye, MMAPv1. I like the changes to date formatting and type conversions.
In my current project I briefly had values stored both as BigDecimal and
Float until I moved the calculations app-wards into Ruby.

------
gremlinsinc
Is there really anything Mongo does that actually makes it worth choosing
over Postgres, or even MySQL, w/ jsonb?

I mean, I think if I needed to think beyond sql, a graph db like arango or neo
might make more sense...

~~~
lilbobbytables
I'm really interested in this as well. It seems that Postgres with jsonb is
such an incredible generalist that I don't see why I'd reach for Mongo.

This is also because I think of Mongo as a generalist, which may or may not
be right.

It seems as though there are better choices for more specific use cases.

~~~
gremlinsinc
The biggest use case (and it's a bad one) is that a stack (MERN/MEAN) and all
tutorials for said 'stack' use Mongo, or a framework like Meteor... it sort
of locks devs into bad practices, when being a little more picky and a little
more curious about the options could make for a better architecture.

It's not like you couldn't rip out Mongo in MERN and use PERN or MyERN (MySQL
Express React Node). There are some good libs/packages for using relational
DBs, and the benefits may outweigh those of Mongo.

I guess one other use case would be an incremental/idle game, where all data
is just stored as one big JSON doc and you just need to connect, update
totals, then sync that data back and forth, without a lot of
relationships/connections or transactional data.
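For that scenario, a single-document update already gives you atomicity with
no transaction involved. A shell-style sketch (collection and field names
invented; needs a running server):

```js
// mongo shell -- all operators in one updateOne apply atomically
db.saves.updateOne(
  { _id: playerId },
  {
    $inc: { "totals.gold": 250, "totals.clicks": 1 },  // bump counters in place
    $set: { lastSync: new Date() }
  }
)
```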

------
sloankev
Can you use this across collections?

~~~
jaydestro
Hi - I work at MongoDB - yes, you can.

Documentation is located:

[https://docs.mongodb.com/master/core/transactions/#transacti...](https://docs.mongodb.com/master/core/transactions/#transactions-
and-replica-sets)

~~~
sbr464
For Mongo, with existing features or the new transaction feature, is it
possible to access the ids that are generated during an update and use them
on subsequent updates, as references, without a round trip to the DB client
to process or build objects? This could be across collections as well.
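Part of this is already possible without transactions: `_id` values can be
generated client-side (the shell and the drivers do this by default on
insert), so you can create them up front and reuse them as references across
collections. A shell-style sketch (collection names invented; needs a running
server):

```js
const orderId = new ObjectId();  // generated client-side, no server round trip
db.orders.insertOne({ _id: orderId, total: 99 });
db.orderItems.insertOne({ orderId: orderId, sku: "abc-123" });  // cross-collection reference
```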

------
ravenstine
This comes at a perfect time for me, because I've been working on an
application running on MongoDB and although I can get away without
transactions, they would help significantly.

