willbmoss 709 days ago | link | parent

I'll agree they are not operational nightmares, but now that we're set up with Riak we can do things that I'm pretty sure your Postgres/MySQL/etc. setup cannot.

1. Add individual nodes to the cluster and have it automatically rebalance over the new nodes.

2. Have it function without intervention in the face of node failure and network partitions.

3. Query any node in the cluster for any data point (even if it doesn't have that data locally).

I'm sure there's other things I'm missing, but the point made by timdoug is the key one. We're at a scale now where it's worth trading up-front application code for reliability and stability down the line.



xb95 709 days ago | link

Yes, Riak gives you flexibility that makes certain things easier. If a node dies, you generally don't have to worry about anything. Stuff Just Works. (Well, in Riak's case, this is true until you realize that it doesn't automatically re-replicate your data to other nodes and relies instead on read repair, so you have to do some dancing around node loss. That puts a kind of pressure on node-loss situations that I find is very similar to traditional RDBMS.)

But everything on your list I have done in a MySQL system, for a comparable ~1,000 lines of code.

1. We implemented a system that tracks which servers are part of a "role" and the weights assigned to that relationship. When we put in a new server, it would start at a small weight until it warmed up and could fully join the pool. Misbehaving machines were set to weight=0 and received no traffic.
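The role/weight scheme described above can be sketched roughly like this (a minimal sketch; the role table and hostnames are invented for illustration, not the actual system):

```python
import random

# Hypothetical role table: each server in a role carries a weight, and
# clients pick a server with probability proportional to that weight.
# Weight 0 drains a misbehaving machine without removing it from config.
ROLE_DB_SLAVES = {
    "db1.example.com": 100,  # fully warmed up
    "db2.example.com": 100,
    "db3.example.com": 10,   # new node, still warming up
    "db4.example.com": 0,    # misbehaving, receives no traffic
}

def pick_server(role):
    """Weighted-random choice over servers whose weight is > 0."""
    candidates = [(host, w) for host, w in role.items() if w > 0]
    if not candidates:
        raise RuntimeError("no servers available in role")
    total = sum(w for _, w in candidates)
    r = random.uniform(0, total)
    upto = 0.0
    for host, w in candidates:
        upto += w
        if r <= upto:
            return host
    return candidates[-1][0]  # guard against float rounding
```

Warming up a new node is then just a matter of bumping its weight over time.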

2. Node failure is easy given the above: set weight=0. This assumes a master/slave setup with many slaves. If you lose a master, it's slightly more complicated but you can do slave promotion easily enough: it's well enough understood. (And if you use a Doozer/Zookeeper central config/locking system, all of your users get notified of the change in milliseconds. It's very reliable.)
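The promotion step can be sketched as follows (names are made up; in a real deployment the publish step would write to Doozer/ZooKeeper so clients learn the new topology, here it is just a callback):

```python
def promote_replacement(slaves, publish):
    """Promote the most caught-up slave on master loss.

    slaves: {hostname: replication_position}, higher = more caught up.
    publish: callable that pushes the new topology to the config system.
    Returns the hostname of the new master.
    """
    if not slaves:
        raise RuntimeError("no slaves eligible for promotion")
    new_master = max(slaves, key=slaves.get)  # most replayed replication log
    publish({"role": "master", "host": new_master})
    return new_master
```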

Network partitions are harder for most applications to deal with than for most databases. It is worth noting that in Riak, if you partition off nodes, you might not have all data available. Sure, eventual consistency means that you can still write to the reachable nodes and be assured that eventually the data will get through, but this is a very explicit tradeoff: "My data may be entirely unavailable for reading, but I can still write something to somewhere." IMO it's a rare application that can keep running without being able to read real data from the database.
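The tradeoff above boils down to quorum arithmetic. Here is my own minimal formulation (not Riak's actual code): with N replicas per key and read/write quorums R and W, the number of a key's replicas still reachable after a partition decides which operations can complete. Riak's sloppy quorums additionally let writes land on fallback nodes (hinted handoff), which is why writes can stay available while reads of real data do not:

```python
def can_read(n, r, reachable):
    # A strict read quorum needs R of the key's N replicas reachable.
    return reachable >= r

def can_write(n, w, reachable, sloppy=False):
    # With sloppy quorums, fallback nodes accept the write regardless;
    # with strict quorums, W of the real replicas must be reachable.
    return sloppy or reachable >= w
```

So with N=3, R=W=2, a partition leaving one replica reachable blocks strict reads and writes, but a sloppy-quorum write still succeeds somewhere.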

3. In a master/slave MySQL environment you would be reading from the slaves anyway, unless you cannot tolerate reading slightly stale data -- e.g., bank payment systems, or anything that fits better in a master/master environment. Since the slaves are all in sync, broadly speaking, you can read from any of them. (But you should use the weighted random as mentioned in the first point.)

...

Please also note that I am not trying to knock Riak. It's neat, it's cool, it does a great job. It's just a different system with a different set of priorities and tradeoffs that may or may not work in your particular application. :)

But to say that it can do things the others can't is incorrect. Riak requires you to have lots of organizational knowledge about siblings and conflict resolution in addition to the operational knowledge you need. A similar MySQL system requires you to have a different set of knowledge -- how to design schemas, how to handle replication, load balancing, etc.

Is one inherently better than the other? I don't think so. :)

-----

dolinsky 709 days ago | link

Overall I enjoyed reading the article and the specific tradeoffs you had to consider when comparing Riak to Mongo for your specific use cases. I'm curious if you had any problems w/r/t the 3 items you highlighted above when using Mongo, as none of those were mentioned in the linked article.

(these points are for MongoDB 2.0+)

1. Adding a new shard in Mongo will cause the data to be automatically rebalanced in the background. No application-level intervention required.

2. Node failure (primary in a RS) is handled without intervention by the rest of the nodes in the cluster. Network partitions can be handled depending on what the value set for 'w' is (in practice it's not an isolated problem with a specific solution).

3. Using mongos abstracts the need to know what node any data is on as each mongos keeps an updated hash of the shard keys. Queries will be sent directly to the node with the requested data.
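The routing idea can be sketched like this (a hypothetical illustration of the concept, not mongos's actual implementation; the chunk table is invented):

```python
import bisect

# mongos keeps a cached mapping of shard-key ranges ("chunks") to
# shards and forwards each query straight to the owning shard.
CHUNKS = [  # (lower bound inclusive, shard), sorted by bound
    ("a", "shard0"),
    ("h", "shard1"),
    ("q", "shard2"),
]

def route(shard_key):
    """Return the shard owning the chunk that covers this key."""
    bounds = [lo for lo, _ in CHUNKS]
    i = bisect.bisect_right(bounds, shard_key) - 1
    if i < 0:
        raise KeyError("key below minimum chunk bound")
    return CHUNKS[i][1]
```

When the balancer moves a chunk, the cached table goes stale and gets refreshed from the config servers; the client never needs to know any of this.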

-----

willbmoss 709 days ago | link

I agree with you in theory; in practice, my experience has been a bit different. Specifically:

1. Since Mongo has a database-level lock (or maybe collection-level now), rebalancing under a heavy write load is impossible.

3. Mongos creates one thread per connection. This means you've got to be very careful about the number of clients you start up at any given time (or in total).

-----

taligent 709 days ago | link

1. Why would you be rebalancing under heavy write load? Wouldn't it be scheduled for quieter periods?

2. I was under the impression that almost all Mongo drivers had connection pools.

-----

bsg75 708 days ago | link

> Why would you be rebalancing under heavy write load? Wouldn't it be scheduled for quieter periods?

I think the example case is when a node fails during a load period - unplanned vs planned.

Some of the public MongoDB stories come from shops that were forced to rebalance during peak loads because they failed to plan to do so in a maintenance window.

Of course waiting to scale out until _after_ it becomes a need will result in pain (of various degrees) no matter what the data storage platform.

-----



