

Simple CouchDB multi-master clustering via Nginx - va_coder
http://ephemera.karmi.cz/post/247255194/simple-couchdb-multi-master-clustering-via-nginx

======
swannodette
I totally agree with the sentiment of the post - using CouchDB has simply been
unbelievable amounts of fun (and certain types of applications would be
painful if not impossible to design without it).

The existence of projects like couchdb-python and couchdb-lucene are just
icing on a very tasty cake.

~~~
kolosy
seconded. it's interesting how it forces you to build for scale right away.

------
z8000
I'm not too familiar with multi master replication. What happens when a
request handled by server A is followed by a subsequent request that is routed
to server B before server A has replicated the changes from the first request?
The client would expect a certain state on the "server" given the first
request but it certainly seems possible to invalidate that.

~~~
jdminhbg
If you attempt to update a record, you send server B a revision token for the
document you're attempting to update; if server B doesn't have that revision
yet (either because the document hasn't been created on server B or because
server B hasn't yet received the latest changes), that update will be
rejected.
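To make the behavior concrete, here is a toy in-memory sketch of that revision check (not CouchDB's actual implementation; in real CouchDB the token is the document's `_rev` field and the rejection surfaces as an HTTP 409 Conflict):

```python
# Toy sketch of optimistic concurrency via revision tokens, as described
# above. Each stored document carries a revision; an update is rejected
# unless the revision the client supplies matches the one the node holds.
import uuid


class ToyNode:
    """A single 'server' holding documents keyed by id."""

    def __init__(self):
        self.docs = {}

    def put(self, doc_id, body, rev=None):
        current = self.docs.get(doc_id)
        current_rev = current["_rev"] if current else None
        if rev != current_rev:
            # This node hasn't seen that revision yet (doc missing or
            # stale) -> reject, analogous to CouchDB's 409 Conflict.
            return {"error": "conflict"}
        new_rev = uuid.uuid4().hex
        self.docs[doc_id] = {"_rev": new_rev, **body}
        return {"ok": True, "rev": new_rev}


a, b = ToyNode(), ToyNode()
res = a.put("order-1", {"total": 10})  # create on server A
# Update routed to server B before replication has copied the doc over:
stale = b.put("order-1", {"total": 12}, rev=res["rev"])
# stale == {"error": "conflict"}
```

Once replication has carried the document (and its revision) to server B, the same update with the current revision would succeed there.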

If you are just expecting the document to be a member of a list (say, you put
an order on server A, then look for all orders belonging to a customer on
server B), you're not guaranteed to see it. This is 'eventual consistency' --
in practice, you want to be sure that you can live with this as a tradeoff.

~~~
z8000
Thank you.

------
mark_l_watson
Great writeup. I use CouchDB behind nginx anyway for simple authentication so
I'll have to try this. The only problem is "continuous replication currently
does not survive server restart" - that is not so good.

~~~
mark_l_watson
OK, apparently this is not such a problem; from the book: "At the time of
writing, CouchDB doesn’t remember continuous replications over a server
restart. For the time being, you are required to trigger them again, when you
restart CouchDB."

So, put something in /etc/rc.local for each server to start the replication
again.
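The script that /etc/rc.local invokes could be as small as a POST to CouchDB's `_replicate` endpoint with `"continuous": true`. A sketch (the host and database names are placeholders, not from the post):

```python
# Sketch: re-trigger a continuous replication after a restart by POSTing
# to CouchDB's /_replicate endpoint. Hosts and database names below are
# made-up placeholders; adjust for your own servers.
import json
import urllib.request


def replicate_request(source, target, couch="http://localhost:5984"):
    """Build the POST that starts a continuous replication."""
    body = json.dumps({
        "source": source,
        "target": target,
        "continuous": True,  # keep replicating as new changes arrive
    }).encode("utf-8")
    return urllib.request.Request(
        couch + "/_replicate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = replicate_request("db", "http://otherserver:5984/db")
# urllib.request.urlopen(req)  # uncomment when a live server is running
```

Each server would run this for its outgoing replications at boot, restoring the mesh described in the post.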

