This sort of testing gap is so hard to avoid. Any complex system has an inexhaustible number of potential feature interactions. It's hard at best to guess which combinations need to be tested, and often hard even to write a test that exercises a particular combination.
What techniques do people use to ensure test coverage of complex cross-feature interactions like this? Two things I recommend:
* Emphasize integration tests, and try to use multiple features in each test. This can help drive out interaction problems; the downside is that these tests are harder to write and (especially) maintain. It can also be difficult to exercise specific code paths.
* Write randomized tests that exercise as many features as possible. The challenge here is identifying failures. "The program didn't assert or crash" is often a good start. A gold standard is to run a simple reference implementation alongside your real implementation, and compare the output... but that's not always possible.
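The reference-implementation approach in the second bullet can be sketched roughly like this. Everything here is illustrative: `fast_sort` stands in for whatever complex implementation you're actually testing, and the oracle is just Python's built-in sort:

```python
import random

def reference_sort(xs):
    # Trivially-correct oracle: the built-in sort.
    return sorted(xs)

def fast_sort(xs):
    # Stand-in for the real, complex implementation under test.
    # (Here it's just another sort, so the example actually runs.)
    result = list(xs)
    result.sort()
    return result

def run_randomized_comparison(iterations=1000, seed=42):
    # Fixed seed => every run generates the same inputs, so a
    # failure is reproducible.
    rng = random.Random(seed)
    for i in range(iterations):
        data = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        expected = reference_sort(data)
        actual = fast_sort(data)
        assert actual == expected, \
            f"mismatch on iteration {i} (seed={seed}): {data}"

run_randomized_comparison()
```

The oracle only has to be correct, not fast, which is what makes this practical: a brute-force or naive version of the algorithm is usually easy to write and easy to trust.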
You NEED tests to behave consistently. If you run the tests twice against the same code, they need to return the same result. Otherwise, how will you know you have fixed the issue the tests caught?
Speaking from experience, soon after you introduce randomness to tests, you will start to get intermittent test failures. When that happens, people will start to ignore failing tests, because "hey, this is probably just a random failure".
The "that's an intermittent failure, just re-run it and it usually passes" attitude is, in my experience, more often due to plain old poorly written tests, usually with a time.sleep() or something similar that makes them unreliable.
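A sleep-based wait like that can usually be replaced with a bounded poll, which is fast when the condition is already true and only fails after a generous deadline. A minimal sketch (the commented-out `job.is_done()` at the end is a hypothetical usage, not a real API):

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.05):
    """Poll until predicate() returns True, with a hard deadline.

    A bare time.sleep(n) in a test is either too short (flaky) or
    too long (slow). Polling returns as soon as the condition is
    met and only gives up after the timeout expires.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one final check at the deadline

# e.g. instead of time.sleep(2) followed by an assertion:
# assert wait_until(lambda: job.is_done())
```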
For complex multithreaded code, it can become difficult to achieve consistent execution, but a fixed seed is at least a good start.
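One common pattern for getting that reproducibility is to accept an explicit seed but always print whichever seed was used, so an intermittent CI failure can be replayed exactly. A sketch, assuming a `TEST_SEED` environment variable (the variable name is my invention, not a standard):

```python
import os
import random

def make_test_rng():
    # Honor an explicit TEST_SEED when replaying a failure;
    # otherwise pick a fresh seed, but always print it so any
    # intermittent failure can be reproduced exactly.
    seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
    print(f"randomized test seed: {seed} (re-run with TEST_SEED={seed})")
    return random.Random(seed)

rng = make_test_rng()
values = [rng.randint(0, 99) for _ in range(5)]
```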
In hardware design, where reliability is key (making a chip costs many millions of dollars and six months of time), there is only random testing (with a seed for reproducibility), not "that bunch of cases that the person testing thought of that day".
I'm not aware of anyone currently looking to ensure their test cases are generated using cryptographically random data.
The Feynman Algorithm: http://wiki.c2.com/?FeynmanAlgorithm
It's the uncomfortable truth we like to dance around. Just accept you can't plan for everything. Think carefully before making design decisions. Build in contingency plans in case things go wrong.
I am still using memcached, but I'm considering Redis for the future.
I am looking for a Redis-based high-performance message queue that can be filled from Node.js and consumed with PHP. Basically a high-performance message queue that doesn't need dozens of servers to start with.
About message queues: I recently developed one called "Disque", but it is going to be ported entirely to Redis, as a module, during the first two quarters of 2018. Otherwise there are many other solutions, many of them based on Redis itself.
While Redis's protocol is simpler, it is a full in-memory database (with optional persistence) plus optional queue/pubsub extensions. RabbitMQ aims to be a full queue/pubsub system only, with in-memory and persistent options.
Its replication and durability features, if you want to add more servers, are also much longer-standing/more battle-tested (though far from perfect; I'm looking at you, "pause-minority" failures). Redis's persistence is quite good these days, though, so that's less of a competitive point.
RabbitMQ's setup is on par with Redis for simplicity. Client libs exist for PHP and Nodejs, and, while the protocol is more complex than Redis's, that usually just means "copy lines from $how_to_guide for startup/shutdown and then just publish/consume like you'd expect".
If I wanted a cache server that I occasionally needed to subscribe to, I'd use Redis no question. For a performance-oriented queue that needed either durability or throughput scalability, I'd start with Rabbit.
For something production-tested already, I've found RabbitMQ to be very easy to operate as a single server. You obviously don't get HA with a single server, but it's been a breeze to manage.
> Is Redis a one-man show? (plus contributors)
Well, yes, started by one man, and he continues to lead it, with many contributions from the community.
Update: I fixed the wrong URL.
Huh? I don't think it is, Redis is by antirez .. https://github.com/resque/resque/graphs/contributors
> A note about slavery: it's unfortunate that originally the master-slave terminology was picked for databases. When Redis was designed the existing terminology was used without much analysis of alternatives, however a SLAVEOF NO ONE command was added as a freedom message. Instead of changing the terminology, which would require breaking backward compatibility in the API and INFO output, we want to use this page to remind you that slavery is both a crime against humanity today and something that has been perpetuated throughout all human history.
Do not be offended on someone else's behalf.
Like you, I had not come across this issue when I landed as a fresh immigrant in California in 1996 to work on replication, but it only took a few minutes for someone to explain it to me, and we carried on saying "supplier/consumer" or "origin/destination" for the subsequent two decades without trouble.