Hacker News

Without intending to undermine anyone's work (and I truly appreciate the work the KeyDB guys did), I would not use this in production unless I saw Jepsen test results [1], either by Kyle or someone supervised by Kyle. Unfortunately, databases are complex enough, and distributed systems even more so, and I've spent many sleepless nights debugging obscure alternatives that claimed to be faster in edge cases. And the worst thing is that these projects get abandoned by the original authors after a few years, leaving me with (to quote a movie) "a big bag of odor".

I see databases like programming languages: if they have a proven track record of frequent releases and responsive authors after 5-7 years, they are ready for wider adoption.

[1] https://jepsen.io/analyses




To counter what the other active business said, we tried using KeyDB for about 6 months and every concern you stated came true. Numerous outages, obscure bugs, etc. It's not that the devs aren't good; it's just a complex problem, and they went wide with a variety of enhancements. We changed our client architecture to work better with traditional Redis. Combined with recent Redis updates, it's rock solid and back to being an afterthought rather than a pain point. It's only worth the high price if it solves problems without creating worse ones. I wish those guys luck, but I won't try it again anytime soon.

* it's been around 2 years since our last use as a paying customer. YMMV.


Actively using KeyDB at work. I would normally agree with you; and for a long time after I initially heard about the project, we hesitated to use it. But the complexity and headaches of managing a redis-cluster, uh, cluster on a single machine — which is the idiomatic way to vertically scale Redis across the cores of one multicore box — eventually pushed us over the edge into a proof-of-concept KeyDB deployment.

Conveniently, we don't have to worry about the consistency properties of KeyDB, because we're not using KeyDB distributed, and never plan to do so. We were only using redis-cluster (again, on a single machine) to take advantage of all cores on a multicore machine, and to avoid long-running commands head-of-line blocking all other requests. KeyDB replaces that complex setup with one that's just a single process — and one that doesn't require clients to understand sharding, or create any barriers to using multi-key commands.
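For anyone curious what the simpler setup looks like: KeyDB's multithreading is enabled by a single config directive, so the whole redis-cluster-on-one-box arrangement collapses to roughly this (a minimal sketch; the thread count and port are assumptions — tune `server-threads` to your core count):

```conf
# keydb.conf — otherwise an ordinary redis.conf
port 6379            # one process, one port; no cluster slots, no client-side sharding
server-threads 4     # worker threads servicing the event loop in parallel
```

Clients connect exactly as they would to a lone Redis instance, so multi-key commands and MULTI/EXEC transactions work without any sharding awareness.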

When you think about it, a single process on a single machine is a pretty "reliable backplane" as far as CAP theory goes. Its MVCC may very well break under Jepsen — hasn't been tested — but it'd be pretty hard to trigger that break, given the atomicity of the process. Whereas even if redis-cluster has a perfect Jepsen score, in practice the fact that it operates as many nodes communicating over sockets — and the fact that Redis is canonically a memory store, rather than canonically durable — means that redis-cluster can get into data-losing situations in practice for all sorts of silly reasons, like one node on the box getting OOMed, restarting, and finding that it now doesn't have enough memory to reload its AOF file.


If I'm trying to just burn through VC money, and scalability is our #1 problem with our 142 users, would this be a good choice then?


How is scalability an issue with 142 users? I am genuinely curious.

Scaling starts being a real issue at 10,000+ users. It's pretty straightforward to write a server in Rust with a single machine capable of handling around 5,000 users, assuming stateless requests.

Maybe you were making a joke and I missed it.


It’s a joke. The parent makes a really solid point and it’s advice you should follow. A boring tech stack is the correct one.

It’s a pretty well documented fact that SV startups tend to spend a lot of money on over-engineering and on making technology decisions based on what’s flash-in-the-pan popular rather than on longevity. Any delta in stability you simply make up with sweat and don’t tell anybody about.

Most startup ideas could be fully implemented as cgi-bin gateway scripts in a few days, but that’s not ‘sexy’. Part of it is a mating dance for VCs: the more hip and bleeding-edge you seem, the more competent you appear, even though the inverse is the actual reflection of reality. So my comment is in response to that running joke; in a way, a new unstable database that could disappear off the Internet within six months is a great technology to base your entire startup on, given the above context.


Okay! Thanks for explaining to the benefit of all knuckleheads (like me).


yeah man it's a real subtle joke. 142 users surely means scalability is the #1 priority.


It is a Snapchat project though, so it is likely at least somewhat decent compared to a random unfunded project. Maybe Snap could pay Jepsen to review it?


I think I'll stay far away from this thing anyway. There are numerous show-stopper bug reports open, and there hasn't been a substantial commit on the main branch in at least a few weeks, possibly months. I'll be surprised if Snap is actually paying anybody to work on this.


[flagged]


That's so ridiculously tone deaf, self-righteous, and demeaning. Lots of talented devs work in thousands of companies.

Learn some humility and perspective.


fwiw, the company I work for has used KeyDB (multiple clusters) in production for years, under non-trivial load (e.g. millions of requests per second). It did serve as a replacement for actual Redis clusters, and there were major improvements from the switch. I can't remember the actual gains as it was so long ago. I do remember it was an actual drop-in replacement, as simple as replacing the redis binary and restarting the service. So if you can afford to, maybe try replacing a Redis instance or two and see how it goes.


This should be the default mode of thinking for anything that stores your data. Databases need to be bulletproof. And that armor, for me, is successful Jepsen tests.


Given how much of a typical code base depends on databases, I think that’s a good perspective.


If you only used the versions of databases tested by Jepsen you would have problems worse than data loss, such as security vulnerabilities, because some tests are years old.

Then again, has anyone independently verified the Jepsen testing framework? https://github.com/jepsen-io/jepsen/


This doesn’t seem to be what was suggested? Using an up-to-date database that has had its distributed mechanisms tested - even if the test was a few versions back - is a lot better than something that uses completely untested bespoke algos.

As for verifying Jepsen, I’m not entirely sure what you mean? It’s a non-deterministic test suite and reports the infractions it finds; the infractions found are obviously correct to anyone in the industry that works on this stuff.

Passing a Jepsen test doesn’t prove your system is safe, and nobody involved with Jepsen has claimed that anywhere I’ve seen.


As a fork, it is essentially a new version of an already Jepsen-tested system. That meets your definition of "a lot better" than "untested bespoke algos".


I can’t tell if you are trolling me, but obviously “we made this single-threaded thing multithreaded” implies new algorithms that need new test runs


I am just evaluating your logic in the form of a reply, so you can see it at work. The same issue you describe happens often across multiple versions of a system.


The parts of Redis that were Jepsen-stressed are Redis-Raft (more recently) and Redis Sentinel, for which the numerous flaws found were pretty much summed up as "works as designed". No part of KeyDB has gone through a Jepsen-style audit; all of its additions are untested bespoke algos.


Chrome is based on a fork of Webkit. Are you going to trust it because Safari passed a review before the fork?


Ask one level higher in the thread.


> Then, has someone independently verified the Jepsen testing framework?

I don't think it matters. Jepsen finds problems. Lots of them. It's not intended to find all the problems. But it puts the databases it tests through a real beating by exercising cases that can happen, but are perhaps unlikely (or unlikely until you've been running the thing in production for quite a while). Having an independent review does nothing, practically, to make the results of the tests better.

In fact, almost nothing gets a perfectly clean Jepsen report. Moreover, many of the problems that are found get fixed before the report goes out. The whole point is that you can see how bad the problems are and judge for yourself whether the people maintaining the project are playing fast and loose or thinking rigorously about their software. There simply isn't a "yeah this project is good to go" rubber stamp. Jepsen isn't Consumer Reports.


Have you ever been in a situation when you are writing tests where you get a test failure and realized that your test assertion was wrong?


Jepsen is more like a fuzzer than a unit test suite. It outputs "I did this and I got back this unexpected result". All of those outputs need to be analyzed by hand.

You don't audit a fuzzer to say "what if it runs the thing wrong". That's not the point of the fuzzer. The point is to do lots of weird stuff and check that the output of the system matches the expectation of what's produced. If the fuzzer outputs a result that's actually expected, then that's easily determined because you have to critically analyze what comes out of the tool in the first place.


It doesn't really matter if the Jepsen test suite is faulty. If it reports something and it is (against all odds) not a valid bug, what's the problem? It does not claim to find all problems (and can't), so this is sufficient.


I certainly understand this sentiment. I am building a general-purpose data manager that has many relational database features. It can do a number of things better and much faster than conventional databases. It is currently in beta and available for free download.

But I would be shocked (and worried) if someone tried to use it as their primary database in production. It just doesn't have enough testing yet and is still missing some features.

Instead, I am promoting it as a tool to do tasks like data analysis and data cleaning. That way it gets a good workout without causing major problems if there is a bug.

https://didgets.com/


