
Can't reply any deeper, figure this is relevant...

Databases are, in my experience and that of my peers, the aspect of a system most likely to crash and burn. On the surface it looks like regular old bugs: every project has them. But there's a reason databases are so hard: they're the perfect storm of unreliable systems, bound together with complex and unpredictable code to try to present a coherent abstraction. Specifically:

1. DBs need to do disk IO for persistence, which pits them against one of the slowest, most-likely-to-fail components of modern hardware.

2. They must maintain massive, complex caches: one of the hardest problems in computer science in general.

3. They must be highly concurrent, both for performance on multicore architectures and to service multiple clients.

4. Query optimization is a dark art involving multiple algorithms and frankly scary heuristics; the supporting data structures can be quite complex in their own right.

5. They need to send results over the network, an even higher latency and less reliable system.

6. They must present some order of events: whether ACID-transactional, partially ordered with vclocks, etc. (see the sketch right after this list).
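
To make the vclock half of #6 a bit more concrete, here's a minimal vector-clock sketch in Python (the names and structure are purely illustrative, not lifted from any particular database):

    # Minimal vector clock sketch: each node keeps a counter per node; events are
    # only partially ordered, and concurrent writes show up as incomparable clocks
    # that the database must somehow reconcile.

    def increment(clock, node):
        """Return a copy of `clock` with `node`'s counter bumped for a local event."""
        c = dict(clock)
        c[node] = c.get(node, 0) + 1
        return c

    def merge(a, b):
        """Pairwise max of two clocks, used when a node receives a remote update."""
        return {n: max(a.get(n, 0), b.get(n, 0)) for n in set(a) | set(b)}

    def compare(a, b):
        """Return 'before', 'after', 'equal', or 'concurrent'."""
        nodes = set(a) | set(b)
        a_le_b = all(a.get(n, 0) <= b.get(n, 0) for n in nodes)
        b_le_a = all(b.get(n, 0) <= a.get(n, 0) for n in nodes)
        if a_le_b and b_le_a:
            return "equal"
        if a_le_b:
            return "before"
        if b_le_a:
            return "after"
        return "concurrent"   # neither dominates: a conflict to resolve

    # Two replicas accept writes independently -> concurrent versions.
    v1 = increment({}, "node-a")
    v2 = increment({}, "node-b")
    print(compare(v1, v2))             # -> "concurrent"
    print(compare(v1, merge(v1, v2)))  # -> "before"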

Almost all these problems interact with each other: a logically optimal query plan can end up fighting the disk or network; concurrency makes data structures harder, networks destroy assumptions about event ordering, caches can break the operating system's assumptions, etc.

Moreover, the logical role of a database puts it in a position to destroy other services: when it fails, shit hits the fan, everywhere. Recovery is hard: you have to replay logs, spool up caches, hand off data, resolve conflicts. Yes, even in mostly-CP systems like MSSQL. All of these operations are places where queues can overflow, networks can fail, disks can thrash, etc.
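
To make "replay logs" concrete, here's a toy write-ahead log in Python; the file name, record format, and fsync placement are invented for illustration, not how any real engine lays out its log:

    import json, os

    LOG_PATH = "toy.wal"   # hypothetical log file, not any real DB's format

    def log_write(key, value):
        """Append the intended change durably before touching in-memory state."""
        with open(LOG_PATH, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())   # this fsync is the slow, failure-prone disk IO from #1

    def replay():
        """Rebuild in-memory state after a crash by re-applying every logged write."""
        state = {}
        if os.path.exists(LOG_PATH):
            with open(LOG_PATH) as f:
                for line in f:
                    rec = json.loads(line)
                    state[rec["key"]] = rec["value"]
        return state

    log_write("balance:alice", 100)
    log_write("balance:alice", 250)
    print(replay())   # -> {'balance:alice': 250}

Real recovery also has to truncate or checkpoint this log, warm caches, and reconcile with peers; every one of those steps is another place to fail.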

Furthermore, the quest for reliability can involve distributing databases over multiple hosts or geographically disparate regions. This distribution causes huge performance and concurrency penalties: we're limited by the speed of light for starters, and furthermore by the CAP theorem. These limits will not be broken barring a revolution in physics; they can only be worked around.
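
To put a rough number on the speed-of-light limit (the 4,000 km figure below is just an illustrative coast-to-coast-ish distance, and the 1.5x fiber factor is a common rule of thumb, not a measurement):

    C_KM_PER_MS = 299_792.458 / 1000      # light travels ~300 km per millisecond in vacuum

    def min_rtt_ms(distance_km, fiber_factor=1.5):
        """Lower bound on round-trip time; fiber_factor accounts for light being
        roughly 1.5x slower in glass than in vacuum (and routes never being straight)."""
        return 2 * distance_km * fiber_factor / C_KM_PER_MS

    # e.g. replicas roughly 4,000 km apart:
    print(f"{min_rtt_ms(4000):.1f} ms minimum per round trip")   # ~40 ms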

The only realistic approaches (speaking broadly) are CP (most traditional DBs) and AP (under research). CP is well-explored, AP less so. I expect the AP space to evolve considerably over the next ten years.

-----

First, I'm going to quibble with #5. The networks databases must communicate over are typically much lower latency and, at the very least, far less subject to spontaneous hardware failure (human error is another matter; I'll concede that's debatable, since it has a lot to do with the complexity of your storage and network systems...).

From elsewhere (lost the link) you wrote:

> My point is that every database comes with terrifying failure modes. There is no magic bullet yet. You have to pick the system which fits your data and application.

I'm totally with you on this part. "State" is the magic from which most (all?) software bugs seem to stem. This has led to an emergent design-time practice of letting as few components as possible manage the authoritative logical state of our systems, and to wanting those few remaining stateful components to be as operationally resilient in the face of adversity as conceivable. These forces have conspired to produce herculean efforts to make such systems resilient and perfect, almost as much as they have conspired to ensure those components present the most impressive catastrophic failure scenarios. ;-)

While on one hand I completely agree that one needs to select the system which best fits your data and application, the maturity of more established solutions means they've been shaped by a far more robust set of contexts. They've crossed off far more "fixing corner case X is now our top priority" moments than those that have come since, so they can address a much broader range of needs, at least for the kinds of applications that have been developed to date. By comparison, most "NoSQL" stores have advantages in a fairly narrow set of contexts. Particularly if you are a startup and really don't know what your needs will be tomorrow, the odds favour going with as high-quality an RDBMS as you can get your hands on.

I really do think far too few startups consider that the thought process should probably be, "Are the core technical problems that stem from our long-term product goals fundamentally ill-suited to the solutions available with RDBMSs?", and only look at alternatives if the answer is yes.

Another way to look at it: of 1-6, only #4 could be argued as not being applicable to a file server (I'd throw in SANs too, but those are so often lumped in with RDBMSs, so let's skip that one). And really, #4 is both simpler and harder for filesystems: they have a far more trivial interface to contend with... but consequently have a far more difficult time determining/anticipating access patterns when selecting the data layout, data structures, and algorithms that maximize IO efficiency.

Why don't we worry as much about choosing a filesystem to suit our app as we do a database?

-----


> First, I'm going to quibble with #5. The networks databases must communicate over are typically much lower latency and, at the very least, far less subject to spontaneous hardware failure (human error is another matter; I'll concede that's debatable, since it has a lot to do with the complexity of your storage and network systems...).

I really wish this were true, man! For many shops it is, but under load, even local networks will break down in strange ways. All it takes is dropping the right SYN packet and your connect latency can shoot up to 3 seconds. It's also common to have a database system with 0.1+ light-second distances between nodes, which places a pretty firm lower bound on synchronization time.
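
One way to keep a single dropped SYN from stalling everything is to budget connects explicitly instead of waiting out the kernel's retransmission timer; a minimal sketch, with an assumed 0.5 s budget and a hypothetical host/port:

    import socket

    def connect_with_deadline(host, port, timeout_s=0.5):
        """Fail fast instead of waiting out the SYN retransmission timer
        (historically around 3 s for the first retry), so a dropped SYN
        surfaces as a quick error rather than a multi-second stall."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout_s)
        try:
            sock.connect((host, port))
            return sock
        except socket.timeout:
            sock.close()
            raise

    # conn = connect_with_deadline("db.example.internal", 5432)  # hypothetical endpoint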

> I'm totally with you on this part. "State" is the magic from which most (all?) software bugs seem to stem.

Absolutely agree; databases are simply the most complicated state monads we use. I actually count filesystems as databases: they're largely opaque key-value stores. Networked filesystems like Coda, NFS, AFS, PolyServe, whatever that Microsoft FS was... those are exactly databases, just optimized for a different use case and with better access to the kernel. SANs are a little less tricky, at least in terms of synchronization, but subject to similar constraints.
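
A toy illustration of the "opaque key-value store" view: treat a directory as the store, hash keys to file names, and let the bytes stay opaque (the path and hashing scheme are made up for the example):

    import hashlib, os

    class FsKV:
        """Treat a directory as an opaque key-value store: keys hash to file
        names, values are whatever bytes you hand it. A caricature of what a
        filesystem already is underneath."""
        def __init__(self, root):
            self.root = root
            os.makedirs(root, exist_ok=True)

        def _path(self, key):
            return os.path.join(self.root, hashlib.sha256(key.encode()).hexdigest())

        def put(self, key, value):
            with open(self._path(key), "wb") as f:
                f.write(value)

        def get(self, key):
            with open(self._path(key), "rb") as f:
                return f.read()

    kv = FsKV("/tmp/fskv-demo")   # illustrative path
    kv.put("user:42", b'{"name": "alice"}')
    print(kv.get("user:42"))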

> While on one hand I completely agree that one needs to select the system which best fits your data and application, the maturity of more established solutions means they've been shaped by a far more robust set of contexts. They've crossed off far more "fixing corner case X is now our top priority" moments than those that have come since, so they can address a much broader range of needs, at least for the kinds of applications that have been developed to date. By comparison, most "NoSQL" stores have advantages in a fairly narrow set of contexts.

You're totally right; traditional RDBMSs have benefited from exhaustive research and practical refinement. Strangely, most other old database technologies have languished. Object and graph stores seem to have been left behind, except at a few companies with specific requirements (LinkedIn, Twitter, Google, FB, Amazon, etc.). AP systems in general are also poorly understood, even in theory: the ongoing research into CRDTs is a prime example.
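
For anyone who hasn't bumped into CRDTs yet: the canonical toy example is a grow-only counter, where each replica increments its own slot and merge is an element-wise max, so replicas converge no matter what order the gossip arrives in. A minimal sketch (not any particular paper's formulation):

    class GCounter:
        """Grow-only counter CRDT: per-replica counts, merge = element-wise max.
        Increments commute and merges are idempotent, so any gossip order converges."""
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counts = {}

        def increment(self, n=1):
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

        def merge(self, other):
            for rid, c in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), c)

        def value(self):
            return sum(self.counts.values())

    a, b = GCounter("a"), GCounter("b")
    a.increment(3); b.increment(2)
    a.merge(b); b.merge(a)
    assert a.value() == b.value() == 5   # both replicas converge to the same total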

So to some extent the relatively new codebase of, say, Riak contributes to its horrible failures (endless handoff stalls, ring corruption, node crashes, etc.). Those will improve with time; I've watched Basho address dozens of issues in Riak and it's improved tremendously over the past three years. The fundamental Dynamo+vclock model is well understood; no major logical concerns there.

Now... the AP space seems pretty wide open; I would not at all be surprised to see, say, distributed eventually-consistent CRDT-based, partial-schema relational, kv, or graph DBs arise in the next ten years. Lots of exciting research going on. :)

> Why don't we worry as much about choosing a filesystem to suit our app as we do a database?

If I had to guess, it's because non-esoteric local filesystems offer pretty straightforward capabilities; no relational algebra, no eventual consistency, etc. If you have access to ZFS it's a no-brainer. If you don't, there are decent heuristics for selecting ext4 vs XFS vs JFS et al, and the options to configure them (along with kernel tweaks). The failure modes for filesystems are also simpler.

That said, I agree that more people should select their FS carefully. :)

-----


This doesn't really answer any of the questions I asked you, but from what you wrote here you have to agree that going with a tried-and-true RDBMS that's been around for decades is a much better prospect than choosing one of the new, unproven NoSQL products.

-----


Sorry, was trying to address underlying concerns rather than specific software.

> you have to agree that going with a tried-and-true RDBMS that's been around for decades is a much better prospect than choosing one of the new, unproven NoSQL products.

I disagree. Like I said, CP and AP are two very different strategies for addressing mutable shared state, and there are no AP RDBMSs that I'm aware of. If your system dynamics are fundamentally AP, no number of Oracle Exadata racks will do the job. If your dynamics are fundamentally CP, Riak, SimpleDB, and Cassandra will make your life a living hell. Few applications are all-AP or all-CP, which is why most teams I know use a mixture of databases.

-----



