I'm going to challenge you here (admittedly without full context, but to make a point). It sounds to me like the problem you're describing requires a database with ACID transactions, which Cassandra is not. Therefore Cassandra is a poor choice, and in my view that is poor engineering. That said, I'd err on the side of agreeing with the developer who determined that they are not capable of writing code that guarantees ACID transactions in Cassandra (that's a hard thing to do), and I'd really question the choice to use it to begin with.
Again without appropriate context, my bet here is that you guys are using Cassandra for other important features you won't get out of a typical RDBMS, and as such you made a trade-off to begin with and decided that Cassandra was "good enough".
Now the point I'm trying to illustrate (and I'm not just doing this to pick a fight, I promise) is that engineering is about trade-offs, and a big part of it is definitely related to the likelihood of a problem occurring.
I think it also completely depends on the domain of the problem, the criticality of the process you're building and the outcome of a major failure of your assumptions.
So I'd argue that "good enough" is an entirely appropriate answer in many contexts and domains, and it's important not to make a blanket assumption that it's wrong.
We aren't in disagreement, and I don't take your comment as a challenge; it was exactly something we went through with this feature. The point is, who gets to decide what "good enough" is? My "good enough" is a lot different from my coworker's "good enough", and I think that's what separates levels of maturity as a developer.
In fact Cassandra wasn't my first choice, but a strongly consistent database wasn't available to us. As I mentioned in another comment, making the very best decision you can given the constraints of your system is what one should strive for. Not stopping at "good enough" because of (poor) intuition that error conditions "probably won't happen".
We decided to go with LWT and eat the latency costs as a trade off to "stronger" consistency, realizing that Cassandra doesn't offer the same strong consistency as an ACID database. Not perfect, but it fit within our SLA, decreased the probability of encountering duplicate values, and if there was an error, it was easier to detect and the user could be directed to try again, vs having a completely silent error condition that would cause a small percentage of our users tremendous amounts of trouble.
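For anyone unfamiliar with LWT: the duplicate-prevention pattern here maps to Cassandra's conditional writes (`INSERT ... IF NOT EXISTS`), which run a Paxos round and return an `[applied]` column telling you whether your write won. Here's a minimal in-memory sketch of those compare-and-set semantics, just to illustrate the behavior; the `LwtTable` class is hypothetical, and a real LWT pays the extra round trips mentioned above:

```python
class LwtTable:
    """Toy model of a table supporting insert-if-not-exists.

    In real Cassandra this would be:
        INSERT INTO users (username, id) VALUES (?, ?) IF NOT EXISTS
    and the result row carries an [applied] boolean.
    """

    def __init__(self):
        self._rows = {}

    def insert_if_not_exists(self, key, value):
        # Returns (applied, current_value). applied is False when the key
        # already exists, mirroring the [applied] column an LWT returns.
        if key in self._rows:
            return False, self._rows[key]
        self._rows[key] = value
        return True, value


table = LwtTable()
applied, _ = table.insert_if_not_exists("alice", {"id": 1})
print(applied)  # first writer wins -> True
applied, existing = table.insert_if_not_exists("alice", {"id": 2})
print(applied)  # duplicate detected -> False; tell the user to retry
```

The key property is the one described above: a losing write is detected at write time and surfaced to the caller, rather than silently creating a duplicate that someone has to clean up later.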
That was my thought, too. If you need ACID transactions, use an ACID database. There are rather a lot of them. And then mirror the data out to Cassandra after the fact, if you need it there. Ironically, Cassandra may not be the source of truth!
Way back in the stone age, MySQL did not yet have ACID transactions. They got them about the same time they stopped bragging about how much faster they were than Oracle, but I digress. Anyway, I had to write a bunch of transactional code around it. Drove the dba and me nuts. We begged for Sybase (we both knew it well), but the startup CTO was an open source purist and hated his first contact with the Sybase sales machine.