I know this is a small gripe, but the name of the company (even though it makes sense) makes my skin crawl a little. The name does stick in my memory though, so I'm not sure if this is a good thing or a bad thing.
That case is different. Virgin started with music records and was already a huge brand by the time they launched the airline. They launched tons of products and all of them were called Virgin (mobile, cola, etc.).
Maybe a better question would be "Who would'a thunk you could name a record label 'Virgin' and get it off the ground?"
A bad name is better than wasting energy bikeshedding over available names. And in my opinion, for a product that solves messy problems "YourPerfectRainbowPony" isn't better. They're not selling to the marketing department.
But managers still need to get approval from executives to purchase it. And who wants to go into a budget meeting pitching for "CockroachDB by Cockroach Labs"? They desperately need a name in there that will be palatable to executives outside of the Silicon Valley venture bubble.
In my first job in AEC, my manager bought Zeos computers for our new CAD stations because he liked the way the name sounded. The big glossy ads in all the computer magazines represented successful marketing. After listening to tech support tell my supervisor to "reset the BIOS" after two hours on the phone over a dodgy graphics card, I realized it would have been better if Zeos had spent some of that money on competent support personnel. It was another several hours before the hard disk settings and everything else that used to live in the BIOS in those days were restored, and the video card was always a little dodgy.
Anyway, my manager also liked to write his letters in Lotus123. Which is a roundabout way of pointing out that if management isn't going to evaluate technical decisions using technical criteria, then there's no escaping the fact that pointy hair is as pointy hair does. If you're pitching the name, not the solution, the name isn't the problem.
I love hearing about a database that decides to focus on replication and self-healing! It drives me nuts how most databases implement a data store and then leave all the complexities of sharding and replication as an exercise for the reader, who is busy trying to get other things done.
I've been looking for a database which does sharding and replication automatically, without giving up any focus on consistency and transactions, so I figure I'm likely to use this in the future. I've struggled to find any others meeting these criteria.
There are a number of datastores that claim to shard and replicate automatically, with no worries for dev/ops/devops.
They've been lying. Never trust them.
Some datastores can actually do this, but performance per beefy server is less than you'd expect. You can use Riak but you have to write proper CRDTs. You can use zookeeper or etcd but those are for small amounts of configuration data, not for large amounts of customer data.
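For anyone unfamiliar with what "write proper CRDTs" entails, here's a minimal sketch of a G-Counter, the simplest convergent replicated data type. This is illustrative Python, not Riak's actual API:

```python
# A G-Counter: a grow-only counter CRDT. Each node increments only its
# own slot; merging takes the element-wise max, so replicas converge to
# the same total no matter how updates are interleaved or replayed.

class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> count

    def increment(self, amount=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + amount

    def merge(self, other):
        # Element-wise max makes merge commutative, associative, idempotent.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self):
        return sum(self.counts.values())

# Two replicas diverge, then merge in either order and agree.
a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # 5 5
```

The catch is that only some operations (grow-only counters, sets with tombstones, etc.) have a clean CRDT formulation; arbitrary application state usually doesn't.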
For all the datastores that claim to do everything automatically and have great performance, we can thank Aphyr for providing proof that they don't live up to their promises, where the rest of us could only suspect it.
I'd suggest trying to use a simpler model, and understand and accept its failure modes. Maybe your app has to go into read-only mode for a few hours if there's a server failure, etc.
>I'd suggest trying to use a simpler model, and understand and accept its failure modes. Maybe your app has to go into read-only mode for a few hours if there's a server failure, etc.
I'm fine with failure modes like that. I just want it to be automated. I don't want to come home from a trip and find that my database master has fallen over and the database slave has been patiently waiting for me to manually promote it for the last few days. I could probably rig up some cron jobs and shell scripts to automate this, but that's exactly the kind of thing I'm looking for the database to do for me, hopefully written by people smarter than me.
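A minimal sketch of the cron-job approach, in Python for clarity; the probe and promote callables are hypothetical stand-ins for whatever health check and promotion command your particular database uses:

```python
# A minimal failover watchdog: probe the master each tick, and if it
# fails N consecutive checks, promote the replica. probe_master and
# promote_replica are injected stand-ins -- in real life they'd wrap a
# ping/health-check command and your promotion command.

def watchdog(probe_master, promote_replica, max_failures=3):
    failures = 0
    def tick():
        nonlocal failures
        if probe_master():
            failures = 0
            return "ok"
        failures += 1
        if failures >= max_failures:
            promote_replica()
            return "promoted"
        return "degraded"
    return tick

# Simulated run: master answers twice, then goes dark.
responses = iter([True, True, False, False, False])
promoted = []
tick = watchdog(lambda: next(responses), lambda: promoted.append(True))
states = [tick() for _ in range(5)]
print(states, promoted)  # ['ok', 'ok', 'degraded', 'degraded', 'promoted'] [True]
```

The hard part a script like this glosses over is exactly what the parent wants handled properly: fencing the old master so you don't end up with two nodes both believing they're primary.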
Networking is the real performance bottleneck these days. CPUs, memory, and SSDs are plenty fast and cheap. If your cluster isn't on at least 10-gig switched Ethernet, you're probably nowhere near the potential performance limit.
It's not about performance and bottlenecks, it's about the CAP theorem: a distributed system can provide only two of Consistency, Availability, and Partition tolerance.
Consistency: you want your whole cluster to have the same data.
Availability: you want to be able to lose one or more nodes and keep serving requests.
Partition tolerance: in case of a net-split (think IRC), each part of your cluster can keep working in isolation, then heal when the link comes back up.
When someone sells you auto-healing and cluster reliability, they are selling you AP, which means you lose the C, something we all take for granted. Cassandra is one of those. Think of what you can't do when all your nodes can have different data for the same model.
Sorry for the useless explanation if you already knew that.
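The C-vs-A tradeoff can be made concrete with toy quorum math: with N replicas, writing to W of them and reading from R of them, a read is guaranteed to see the latest write only when R + W > N, because the read and write sets must then overlap. A small brute-force check:

```python
# Quorum intersection: a system is always read-your-writes consistent
# iff every possible write-set of size W intersects every possible
# read-set of size R -- which holds exactly when R + W > N.

from itertools import combinations

def always_consistent(n, w, r):
    replicas = range(n)
    # Consistent iff every write-set intersects every read-set.
    return all(set(ws) & set(rs)
               for ws in combinations(replicas, w)
               for rs in combinations(replicas, r))

print(always_consistent(3, 2, 2))  # True:  2 + 2 > 3
print(always_consistent(3, 1, 1))  # False: a read can land on a replica
                                   # the write never touched
```

Cassandra's tunable consistency levels are exactly this knob: ONE/ONE is fast and available but can return stale data; QUORUM/QUORUM buys back consistency at the cost of availability during a partition.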
>When someone sells you auto-healing and cluster reliability, they are selling you AP, which means you lose the C, something we all take for granted. Cassandra is one of those. Think of what you can't do when all your nodes can have different data for the same model.
This is pretty hyperbolic. Netflix does perfectly fine with this model given that they run Cassandra at its lowest consistency level[1]. If they can reliably store watch histories, run recommendations, settings, and playlists on this model, I'm wondering what you have in mind when you say "think of what you can't do". Besides, it's not like large AP systems are a new thing; have you ever overdrawn your account?
This is the most hand-wavy, unrigorous talk on a distributed database I've ever seen. You run a test 5 times in optimistic conditions and that gives you confidence that "you can trust it" to replicate your writes?
There are a multitude of failure cases in which it cannot replicate those writes. Ultimately your database has to be a decision based on the availability and consistency needs due to your use case, period. "Trust" should never come into the discussion at all, you should be well aware of what your tradeoffs mean in the worst case.
"Today, we’re launching CockroachDB for everyone. Use it. Build on it. Contribute to it!"
Does this mean it's more or less ready? The status on GitHub hasn't been updated in quite a while and lists it as alpha, with important parts like Raft consensus still missing.
Can someone (preferably from the team) clarify the current situation?
PS.: CockroachDB is the only distributed DB that I would bet on going forward and being a solid base for a big distributed DB.
Not quite ready yet, but the pace has picked up dramatically. We've begun work on the structured data layer and are whipping up a suite of extensive acceptance tests (load testing, performance metrics, ...) to iron out all of the performance issues/bugs that we don't want to be a part of the beta.
Raft consensus, btw, is already implemented. We'll update the README shortly to give a more concrete estimate of the situation.
The plan is to get out of alpha as soon as possible. I'll leave it to the founders to throw dates around but we're working hard on getting the technical core on solid ground, and all the auxiliary stuff (UI, deployment, ...) required for beta is getting a lot of attention. We'll have hands-on deployment demos soon and if you follow the project in the coming weeks you'll probably get a good idea of where things are going.
I've been following the development of this project from the beginning and it has been very interesting to see how they've productized it. IIRC, they all used to work at Square (and before that in a startup called viewfinder) and started it on a hackweek.
Opening line: Databases are the beating heart of every business in the world
Well that's not remotely true, is it? Not even close. Is it really a good idea to lead with something so obviously untrue? If you're trying to convince me of something (i.e. that this product is good), putting such a jarring, obvious falsehood right at the start is a bad idea. I'm wondering if they're deliberately spoofing their own seriousness, but I see nothing else in there to support that.
This line didn't bug me so much. Would you have been okay with something subtly milder? e.g.
Databases are at the beating heart of every business in the world
or even:
Data is the beating heart of every business in the world.
Depending on how generous a reader is feeling, either version is still untrue, or a massive hyperbolic exaggeration that stretches the word "database" far beyond its actual definition; I'd wonder (in fact, I actually am wondering this) if they were living in some kind of bubble.
> Cockroach is a distributed key:value datastore (SQL and structured data layers of cockroach have yet to be defined) - emphasis mine
I guess this is interesting, but distributed hard-consistency pure K-V stores have been done before: ZooKeeper, etcd, etc. It seems like the vast majority of the hard work is left to do. I don't want to get into naming arguments, but I wouldn't really call this a 'database' yet. It doesn't sound like you can do anything but a key lookup or range query currently, which is incredibly limiting for most real-world applications.
I somewhat question the approach. e.g. why not figure out the hard part first? i.e. build the `SQL and data layers` on top of ZooKeeper or etcd, then replace the backend to scale better? I would think this would get a lot more early adopters. As is, it's a very niche use case that the alpha fills.
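For context on what "key lookup or range query" amounts to, here's a toy sketch of a sorted KV store with a half-open range scan, roughly the interface such stores expose (illustrative Python, not Cockroach's API):

```python
# A sorted key-value store: keys kept in order so a range scan is a
# binary search for the bounds plus a linear walk between them.

import bisect

class SortedKV:
    def __init__(self):
        self.keys = []   # kept sorted
        self.store = {}

    def put(self, key, value):
        if key not in self.store:
            bisect.insort(self.keys, key)
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

    def scan(self, start, end):
        # Half-open range [start, end), as most KV APIs define it.
        lo = bisect.bisect_left(self.keys, start)
        hi = bisect.bisect_left(self.keys, end)
        return [(k, self.store[k]) for k in self.keys[lo:hi]]

kv = SortedKV()
for k in ["apple", "banana", "cherry", "date"]:
    kv.put(k, k.upper())
print(kv.scan("b", "d"))  # [('banana', 'BANANA'), ('cherry', 'CHERRY')]
```

Everything relational (secondary indexes, joins, SQL) then has to be encoded on top of exactly these two operations, which is why the structured layer is where so much of the remaining work sits.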
If you look at the documentation (eg., [1]), the design has been rather carefully thought out; it's just that they're implementing it from the bottom up.
According to their roadmap [2], they're aiming for KV functionality in 1.0 and aren't aiming for SQL until past version 2.0 (it's currently alpha).
Given the backgrounds of the technical people involved (including Google, as this project is inspired by Spanner), they should have a lot of experience with what they're trying to accomplish.
As for "done before", a core feature of Cockroach is true ACID transaction support, including snapshot isolation, something no distributed NoSQL database I know about supports. (ArangoDB does support transactions, but is mostly NoSQL only in the sense of implementing a different query language than SQL.)
Exactly right. The hard part is building a key-value store with a powerful notion of transactions (not just compare-and-set or the like), and that's what's mostly done. Structured data is still work, but on the shoulders of giants.
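For readers unfamiliar with snapshot isolation, the MVCC idea underneath it can be sketched in a few lines: every write is stored with a timestamp, and a transaction reads as of its start timestamp, so later commits don't disturb what it sees. A toy illustration, not Cockroach's implementation:

```python
# Minimal MVCC: each key maps to a sorted list of (timestamp, value)
# versions. A read at snapshot_ts returns the newest version at or
# below that timestamp, giving a stable snapshot of the store.

class MVCCStore:
    def __init__(self):
        self.versions = {}  # key -> sorted list of (ts, value)

    def write(self, key, value, ts):
        self.versions.setdefault(key, []).append((ts, value))
        self.versions[key].sort()

    def read(self, key, snapshot_ts):
        # Latest version at or below the snapshot timestamp.
        candidates = [v for ts, v in self.versions.get(key, []) if ts <= snapshot_ts]
        return candidates[-1] if candidates else None

db = MVCCStore()
db.write("x", "v1", ts=10)
snapshot = 15               # a transaction starts here
db.write("x", "v2", ts=20)  # a later commit
print(db.read("x", snapshot))  # 'v1' -- the snapshot is unaffected
print(db.read("x", 25))        # 'v2'
```

The distributed-systems difficulty is everything this sketch omits: assigning timestamps consistently across nodes, detecting write-write conflicts, and garbage-collecting old versions.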
> As for "done before", a core feature of Cockroach is true ACID transaction support, including snapshot isolation, something no distributed NoSQL database I know about supports.
ZooKeeper has ACID transactions which I believe are linearizable (which trumps SI). The downside is the memory-only working set, but given how cheap memory is, I'd still rather have a memory-only ZooKeeper with a rich query interface than a large-storage KV store with a minimal query interface.
> ArangoDB does support transaction, but is mostly NoSQL in the sense of implementing a different query language than SQL
ZooKeeper is not a general-purpose database. I haven't heard of anyone using it as one, either.
> What is your definition of NoSQL?
I don't have one, and I think the term isn't terribly useful. But the whole idea of NoSQL started as an attempt to break free of the relational aspect of SQL, because things like joins, strict schemas, foreign keys, and normalization were perceived as getting in the way of distribution. ArangoDB supports joins (but not foreign keys, because it's schemaless) and an SQL-like query language, which makes it a lot closer to an SQL database than something like Redis or Cassandra.
Great storytelling, accompanied by a call to action at the end... but right there at the end a big bold button (or link) is missing; you have to figure out that the tech details are reachable from the menu. Make it simpler for the reader!
I've been following this project for well over a year now. It's come a long way, has a long way to go still, but it's pretty exciting as an alternative to weak consistency stores available now.
Agreed. I'm in the middle of implementing one of the lesser DBs, with all of the engineering that requires ahead of me. Unfortunately this doesn't look like a smart choice until 2.0, which is probably years away. Too long to wait for.
Except it's not. Strozzi used the term to refer to a relational database that didn't support SQL commands; the first modern usage of the term NoSQL described the slew of databases that copied Google's Bigtable approach (CouchDB et al.).
I'm the author of a distributed database competing with them. Overall I'm impressed with their design document, which is refreshing. I'm sad to say most other databases don't come anywhere near thinking these things through, except as an afterthought, so I am glad to see they are making it their priority.
With that said, they seem to be assuming that their clock skew (ε) has a fixed maximum boundary which is incredibly disconcerting to me as it implies that in certain (rare and anomalous) network partitions that they'll get data corruption and fail.
I can see how they, coming from a Spanner background with atomic clocks, might assume this. But this assumption requires that their database cluster is always connected, within some heartbeat interval (which they mention) such that they can trust there exists a maximum bounded ε skew.
So while it may seem like a dumb question, I honestly must ask a very trivial one: how does CockroachDB handle basic network partitions? I assume they have a good answer to this, but it needs to be clarified in order to answer the more important issue of anomalous partitions, like split-brain. This might rip the cockroach in half, quite literally, meaning that all the other "guarantees" they give, like linearizability and global consistency, get thrown out the window.
Cockroach trusts the MaxOffset, and if your clocks don't live up to the promise, you might get some stale reads. By the way, Spanner breaks in the same way if their clock offset (via their TrueTime API) fails them. But Spanner has to wait out the MaxOffset on every commit, we don't - so we get away with having it high enough for off-the-shelf clock synchronization and save you the atomic clocks, at similar guarantees. That's a very good deal. If you happen to have atomic clocks around and you have strong guarantees on your uncertainty like Spanner does, you get linearizability at the same price.
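The MaxOffset tradeoff can be sketched as an uncertainty interval: a read at timestamp t must treat any value committed in (t, t + MaxOffset] as possibly earlier in real time (the writer's clock may be ahead), so it restarts above that commit rather than risk a stale read. A toy model; in the real system it is the transaction that restarts, and the offset value here is just an assumption for illustration:

```python
# Uncertainty-interval reads: a value committed within max_offset above
# the read timestamp might really have happened "before" the read, so
# the read restarts at a timestamp above it instead of ignoring it.

MAX_OFFSET = 250  # assumed clock-skew bound, in ms

def read_at(versions, read_ts, max_offset=MAX_OFFSET):
    """versions: sorted list of (commit_ts, value)."""
    uncertain = [ts for ts, _ in versions if read_ts < ts <= read_ts + max_offset]
    if uncertain:
        # Restart the read above the uncertain commit.
        return read_at(versions, max(uncertain), max_offset)
    visible = [v for ts, v in versions if ts <= read_ts]
    return visible[-1] if visible else None

versions = [(100, "a"), (1100, "b")]
print(read_at(versions, 1000))  # 'b': 1100 falls in 1000's uncertainty window
print(read_at(versions, 700))   # 'a': 1100 is unambiguously in the future
```

This also shows why Spanner pays on every commit while Cockroach pays only on contested reads: with a tight clock bound the uncertainty window is small, so restarts are rare, and with a sloppy bound you just restart more often.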
Just from skimming the design doc, it appears that if your clock skew exceeds the maximum bound, it would break linearizability. I haven't parsed through all the details of their SSI implementation, but it appears that even with arbitrary skew they would still enforce serializable transactions. However, it appears that performance under high skew would drop off dramatically.
Without a global clock you basically have to give up uncontended snapshot reads and linearizability for cross-shard transactions. That would be a completely different system from spanner and cockroachdb.
I believe that if a node in the consensus group exceeds ε clock skew, it will be kicked out of the group.
As far as network partitions go, a consensus must exist for reads or writes. If you don't have 3 out of 5 working correctly and talking to each other, then you are down.
That's correct, it's a consistent system and so the majority needs to be involved in mutating writes. Reads typically can read from one designated copy of the replica directly (bypassing Raft).
Overuse of that comic is a pet-peeve of mine. That comic is about standards, which the entire point is for everyone to agree on. That doesn't apply to all products in general! The point of a database isn't for everyone to use the same product. It's to store data. It's not ironic in any way that there are multiple competing database products! It's only standards that have any irony in that fact!
Except I still don't know how I could build my own S3 or SimpleDB equivalent, even provided I have unlimited servers at my disposal. By looking at this project, it seems like they're trying to tackle the big problem, and that's a good thing.
That's just an image, though. It's not actually a page. And since you're on Firefox, the image is centered on a dark grey background, which imitates the common modal UI design pattern... but it isn't one.
Now why they would think to make that image a clickable link is beyond me...
Someone actually prefers lightboxes over direct links to images?? I've always hated them and assumed they were a nuisance to everyone. I guess one's beliefs are challenged every day...