Riak 2.0.0 RC1 (basho.com)
100 points by cmeiklejohn on July 21, 2014 | 26 comments

I have a hard time keeping up with all the NoSQL engines out there. What can I do about it? I could easily evaluate, say, CouchDB in a side project, but a) that's not necessarily going to tell me a lot about its characteristics at scale, and b) I can't do this even for just the big NoSQL engines out there.

Is there a good overview? I realize that this is sort of an oxymoron -- a high-level overview is doomed to fail because you can't compress the complex characteristics down to a few bulletpoints. Antirez put it this way [0]: "That said I think that picking the good database is something you can do only with a lot of work. Picking good technologies for your project is hard work, so there is to try one, and another and so forth, and even reconsidering after a few years (or months?) the state of the things again, given the evolution speed of the DB panorama in the recent years."

That was a comment to a pretty good overview [1], which, despite being 3 years old, is still useful. Apart from the purely technical characteristics, social characteristics such as rate of updates, adoption (and by whom?), and openness are also interesting. You just "know" these things for the fields you're working in, but they're very hard to tell from outside and rarely discussed.

[0] https://news.ycombinator.com/item?id=2053594 [1] http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis

You can find a lot of good articles about CAP characteristics of various NoSQL products here: http://aphyr.com/tags/Jepsen

Ooo, there's some support for CRDTs in here:


yay! these look really great!

That's great, but isn't the whole point of CRDTs that you don't actually need any special support as long as you get "access" to conflicting writes and can resolve them at the application level? It seems that they would be trivial to implement atop Riak-1.x, unless I'm missing something.

Don't get me wrong: Built-in support is fine and all, but it's hardly going to save the world.
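To illustrate what I mean by resolving at the application level, here's a minimal state-based G-Counter in Python (names and structure are illustrative only -- this isn't Riak's API or riak_dt):

```python
# Minimal state-based G-Counter: each actor only bumps its own slot,
# and merge takes the element-wise max, so merge order never matters.
def increment(state, actor, n=1):
    state = dict(state)
    state[actor] = state.get(actor, 0) + n
    return state

def merge(a, b):
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(state):
    return sum(state.values())

# Two "siblings" produced by concurrent writes on different replicas:
s1 = increment({}, "replica-a", 3)
s2 = increment({}, "replica-b", 2)

# A client that receives both siblings resolves them deterministically:
resolved = merge(s1, s2)
assert value(resolved) == 5
```

Since merge is commutative, associative, and idempotent, it doesn't matter which sibling the client sees first, which is the whole point.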

EDIT: Just to add something other than skepticism, I found this video with Marc Shapiro quite interesting (and on-topic!).


Edited to say I'm a Basho employee and did a lot of the development work on CRDTs in Riak. Also, Hi!

You can use any CRDT implementation you want from the client. You can roll your own, or you can use riak_dt, or some other library. Indeed, using a third-party CRDT library with Riak should be trivial, since conflicts are exposed to the user -- though there aren't many out there (yet?).

Writing your own ad hoc merge logic is somewhat more complex. Shapiro et al. describe it as "complex, and error prone" in their CRDT paper. Google's F1 paper says:

"We also have a lot of experience with eventual consistency systems at Google. In all such systems, we find developers spend a significant fraction of their time building extremely complex and error-prone mechanisms to cope with eventual consistency"


"Designing applications to cope with concurrency anomalies in their data is very error-prone, time- consuming, and ultimately not worth the performance gains."

Building CRDTs into Riak is an attempt to mitigate this.

I don't think CRDTs built in to Riak will "save the world", but hopefully, for a portion of current or potential application developers on Riak, built-in CRDTs will make data modelling easier. It will, I hope, allow these developers to focus on their application, and not conflict resolution, when working with Riak.

Built-in CRDTs are a step towards maturity for EC systems. I'm pretty excited about this work and think it opens up a lot of possibilities.

If you find making a general purpose, reusable library of state based CRDTs trivial, please send Basho a CV.

Managing the number of actors, safely dealing with tombstones, and reducing space complexity are just a few reasons why one might want CRDTs on the server side.

Not only that, why make every user reimplement the same resolution strategies? Or make Basho devs reimplement them for each official client? You'll increase the chance of bugs or subtle semantic differences. Also, many users don't know or want to understand conflict resolution. They want a data structure that just "does the right thing". This is most easily achieved by baking it into the database itself.

Disclaimer: Former Basho developer with a lot of pride in the 2.0 release.

Can you add (or link to) something with more details?

"Managing the # of actors" doesn't make much sense to me without further context. I know that Erlang is actor-based, but that shouldn't matter if I'm e.g. a client of the system? Also, I'm not sure what "safely dealing with tombstones" would mean -- what additional safety is added by "server-side" CRTDs which cannot be achieved with client-side code (and how so?). (Etc.)

Full disclosure: I’m a current employee of Basho Technologies.

Other than some supporting work [1] [2] introduced into 2.0 to provide advanced causality tracking, you’re correct in assuming we could have introduced more [3] CRDTs as part of the Riak 1.x series. We could have also implemented all of the CRDTs we provide in the client as well, which is similar to what the SwiftCloud CRDT reference platform does.

There's a couple important things to note here, however:

* When talking about merging conflicting writes, we are specifically referring to state-based CRDTs, which is what we have implemented in Riak, not operation-based CRDTs.

* Retrieving conflicting writes from the client, or siblings as we call them in Riak, requires bringing all of the siblings to the client, performing the merge operation, and shipping the updated state back. Given this, the number of siblings an object can have on disk, assuming all merge operations happen at the client, is potentially unbounded if you never ever read, and only ever write. When implemented on the server, we can ensure that we perform this merge operation during both the read and write cycle, keeping the sibling count down to one and reducing the amount of state we need to ship to the client.

* In addition, we use the coordinating node (really, a combination of virtual node and partition index) of the write as the "participant" or "actor" for the operation. This is not to be confused with actor-model-based languages. This allows us to have better control over actor growth; when dealing with clients all writing to CRDTs, every single participant needs to have a unique actor id. Recall that most of the CRDTs track actor counts, for instance the G-Counter, which is structurally equivalent to a vector clock, although semantically different. This introduces a problem of garbage collection. Interval tree clocks are one such solution for addressing the problem, but cannot be used as the basis for some CRDTs. [4]

* Finally, there is work underway in making state-based CRDTs more efficient through "delta-CRDTs" [5], which allow for a more efficient optimistic and anti-entropy repair mechanism.
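To illustrate the actor-growth point above with a toy model (Python, illustrative only -- not Riak internals): if every client is its own actor, a G-Counter gains an entry for every client ever seen, whereas funneling writes through a small fixed set of coordinating vnodes keeps the state bounded.

```python
def client_side(num_clients):
    # Every client acts as its own CRDT actor: one entry per client, forever.
    state = {}
    for i in range(num_clients):
        actor = f"client-{i}"
        state[actor] = state.get(actor, 0) + 1
    return state

def server_side(num_clients, num_vnodes=8):
    # Writes are coordinated by a small fixed set of vnodes (here, a toy
    # stand-in for Riak's ring), so actor count stays bounded.
    state = {}
    for i in range(num_clients):
        actor = f"vnode-{i % num_vnodes}"
        state[actor] = state.get(actor, 0) + 1
    return state

assert len(client_side(10_000)) == 10_000  # one actor entry per client
assert len(server_side(10_000)) == 8       # bounded by the vnode count
assert sum(server_side(10_000).values()) == 10_000  # same logical value
```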

While the most notable resource for exploring CRDTs continues to be the comprehensive report by Shapiro et al. [6], in practice many of the data structures outlined there have unbounded growth in garbage (specifically referring to items such as the OR set, which tracks an object for every operation performed). Therefore, we rely on some of the more optimized representations which don't accumulate garbage. [7] In addition, the conflict-free, composable, replicated map structure provided by Riak 2.0 was specifically invented by Basho, and it is the first of its kind. [8] It took many hours and iterations on QuickCheck models to ensure that, given somewhat arbitrary composition, merge operations happened correctly. This is why there has been interest in exploring alternative ways of checking or building these models. [9]
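As a toy illustration of that garbage problem (deliberately naive Python -- nothing like riak_dt's optimized representations): in a textbook OR set, every add mints a unique tag and every remove leaves tombstones behind, so even a logically empty set keeps growing.

```python
import itertools

_tag = itertools.count()  # globally unique tags for adds

def new_orset():
    # adds: element -> set of unique tags; tombstones: tags seen removed
    return {"adds": {}, "tombstones": set()}

def add(s, elem):
    s["adds"].setdefault(elem, set()).add(next(_tag))

def remove(s, elem):
    # Observed-remove: tombstone every tag we have observed for elem.
    s["tombstones"] |= s["adds"].get(elem, set())

def members(s):
    return {e for e, tags in s["adds"].items() if tags - s["tombstones"]}

s = new_orset()
for _ in range(1000):
    add(s, "x")
    remove(s, "x")

assert members(s) == set()           # the set looks empty...
assert len(s["tombstones"]) == 1000  # ...but carries a tombstone per add
```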

By storing these CRDTs at the server side, we are also able to provide an operation-based interface for interacting with these objects from all of our clients, and leave the complexity of implementing the CRDTs out of the client. This additionally allows our search offering, Yokozuna, to index these data types and provide query over their values.

[1] https://github.com/basho/riak_kv/pull/746

[2] https://github.com/basho/riak_core/pull/463

[3] http://basho.com/counters-in-riak-1-4/

[4] http://gsd.di.uminho.pt/members/cbm/ps/itc2008.pdf

[5] https://twitter.com/xmal/status/467331615535149059

[6] http://hal.inria.fr/inria-00555588

[7] http://arxiv.org/abs/1210.3368

[8] http://dl.acm.org/citation.cfm?id=2596633

[9] http://arxiv.org/abs/1406.4291

* Edited to fix citation formatting.

For what it's worth, I'm putting up a more permanent resource with information, links to papers, etc., on my blog:


Implementing CRDTs on riak 1.x was done years ago: https://github.com/mochi/statebox

That has almost nothing in common with CRDTs in Riak 2.0: http://docs.basho.com/riak/2.0.0/theory/concepts/crdts

So I guess the magic happens when your client fetches, say, a map, and then performs only the required operations on it? So instead of just storing the whole object back after adding a field, you instead send back an "add field" operation?

CRDTs in Riak are different from normal KV operations because you don't have to fetch CRDTs to modify them. Instead, you tell Riak which operations should be performed on them, such as adding a field to a map as you mention. Each data type has its own set of operations. Counters can be either incremented or decremented, you can add or remove elements to/from a set, and so on.
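Sketched in Python (hypothetical names, not the actual client API), the operation-shipping idea looks like this: the client never fetches the stored value, it just sends a small description of the change for the server to apply.

```python
def apply_op(map_state, op):
    """Server side: apply one operation to a stored map of counters/sets."""
    kind, field, arg = op
    if kind == "increment":
        map_state[field] = map_state.get(field, 0) + arg
    elif kind == "add_to_set":
        map_state.setdefault(field, set()).add(arg)
    return map_state

stored = {}  # what the server holds; the client never reads it back
for op in [("increment", "page_views", 1),
           ("add_to_set", "tags", "riak"),
           ("increment", "page_views", 1)]:
    stored = apply_op(stored, op)

assert stored == {"page_views": 2, "tags": {"riak"}}
```

Each operation is small regardless of how large the stored object has grown, which is why this avoids the read-merge-write round trip.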

This looks really nice. The lack of automatic merging was really the last hurdle preventing me from wanting to use Riak.

One issue: I'm surprised that the merge algorithm seems to generally pick the last write as the winner. For example, about registers, the docs say: "The most chronologically recent value wins, based on timestamps". But that's not always correct: for example, if client A reads version 10 and writes version 11, and then some client B has version 9 and writes version 12, you have a conflict where client A's version should win even though it's older, since its history is more correct.

Or is the documentation just ambiguously worded? It says the algorithm is _weighted_ towards last-write-wins, but also that it takes history into consideration.
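For what it's worth, that scenario can be made precise with version vectors (a Python sketch, not Riak's implementation): neither write causally descends the other, so a causality-aware store would keep both as siblings rather than letting the later timestamp silently discard A's write, which was built on the newer base.

```python
def descends(a, b):
    """True if version vector a has seen at least everything b has seen."""
    return all(a.get(actor, 0) >= n for actor, n in b.items())

# Client A read the state at {server: 10} and wrote on top of it:
write_a = {"server": 10, "client_a": 1}
# Client B wrote on top of a stale read at {server: 9}, later in time:
write_b = {"server": 9, "client_b": 1}

# Neither write descends the other: they are concurrent.
assert not descends(write_a, write_b)
assert not descends(write_b, write_a)
```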

Only last-write-wins registers use last-write-wins to resolve conflicts. Maps and Sets and Flags use an Add-Wins/Observed-Remove semantic based on fine-grained causality tracking methods borrowed from dotted version vectors; counters are vectors of actor->count pairs. None of these types use wall-clock time at all.

For something like a single string register inside a Map (which is the register you refer to), a simple LWW seemed adequate. Maybe in the future we can add some more complex type here, maybe with causal history + timestamp arbitration (like Riak's allow_mult=false).

So a register in a map will resolve correctly vis-a-vis my example above?

I'm primarily a RoR developer, and Jose Valim's new language, Elixir, which looks Ruby-esque but runs on the Erlang VM, has piqued my interest in learning more about Erlang.

Erlang has always been on the outskirts of my awareness as something that might be worth looking into, but I can't quite determine what it's best used for. I know it's behind a lot of telecom stuff, and that it powers WhatsApp, but when you're not dealing with massive scale where you need distributed computing and fault tolerance, are there still benefits?

Does Erlang make sense for a side-project web app? Or is it mostly for enterprise-level applications?

Very interesting.

Riak is one of the core components in the new Spine2 data messaging and handling hub being built for the NHS here in the UK.

The NHS is one of the largest producers and consumers of data in Europe and the new hub is being evolved using FOSS software and agile methods.

More details: http://www.ehi.co.uk/news/ehi/8534/spine2-built-in-house-on-...

I'm most excited about the embedded Solr search engine in this release (Yokozuna); I've always felt that, architecturally, search and data should sit in the same place.

is that actually solr or is it just lucene-based? I.e. do you use the same clients, schema, extension modules you'd use with normal solr?

It's Solr:

"Yokozuna comes pre-bundled with Solr 4.0.0 running in the Jetty container." (they now bundle 4.1.0).

Source: https://github.com/rzezeski/yokozuna/blob/v0.3.0/docs/RELEAS...

The Solr version in the Release Candidate is actually 4.7.0.

Congrats Basho!

