no thanks. i'd much rather trust postgres to handle things like foreign keys, unique constraints, transactions, and other goodies than the application, which is what ends up happening when you try to do this in a key:value store.
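To make that concrete, here's a minimal sketch of the kind of checks the database does for free, versus re-checking them in application code (sqlite3 just so it runs standalone; the same DDL idea applies in Postgres, and the table/column names are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite makes FK enforcement opt-in; Postgres enforces by default
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL UNIQUE)")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER NOT NULL REFERENCES users(id))")

    conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
    try:
        conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")  # duplicate email
    except sqlite3.IntegrityError as exc:
        print("rejected by the database:", exc)

    try:
        conn.execute("INSERT INTO orders (user_id) VALUES (999)")  # no such user
    except sqlite3.IntegrityError as exc:
        print("rejected by the database:", exc)

With a KV store, both of those checks become code you write, test, and race-condition-proof yourself.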
precisely. I feel like I'm becoming a grumpy old man, because it seems to me, at face value, that the onslaught of 'developers' has brought with it, generally, an aversion to tried and true things... or, really, to everything other than the programming language they've learned. Few of the people I encounter know anything about SQL, much less its more advanced features or design. Same with Linux systems. Why care, just throw it in a container and kill it if it acts up. It's a real shame, because I think if more folks took the time to understand things, in this case relational databases, they'd realize how absolutely powerful they are, and why this is a terrible idea.
Don't call yourself a grumpy old man. Call yourself a 'programmer who is capable of thinking more than one month ahead', which is still shorter than the 'MVP' release cycle anyway, so you are saving everybody both time and a maintenance hellscape where every typo is a new variable name.
I feel this is more about IT departments lowering their bar to satisfy their staffing needs fueled by lowering the bar in the previous round. It's a vicious cycle and a well known one, even in other fields.
Because many "engineers" do not have a solid understanding of the field, companies have been exploiting that by using and promoting solutions that would have been otherwise dismissed with very annoyed faces. Even docker would have been dismissed, as "cool but it has safety issues, so go home and redo your implementation".
The docker comparison is apt. Red Hat took Docker's work home and did their homework with Podman; as a result, you get a basically complete Docker-compatible workflow, but now your containers don't have to run as root and you don't need a daemon constantly running on the system. The organization that really knows Linux inside and out made the better implementation.
The real problem is there are valid reasons for solution X or Y; KV DBs exist for a valid reason and so do relational DBs.
But then there are the special people out there, who spend hours blogging about their personal opinion on how the giant grey blob KV DB can solve world hunger.
Because it's all they know, they are afraid of change, they are afraid of ~ gasp ~, having to learn outside the box they dug themselves into, or they think they are these amazing super programmers that have the answer to everything.
And then there are those who put all this fluff on their resumes without actually knowing what they are doing, because the senior engineer made them use it.
> Often times, we end up spending countless hours setting up databases, schemas, and dealing with ORM systems, where we could simply treat data as keys and values.
There is always a schema, whether you want your database to help you with it or not is your choice.
SQL is not that hard. And the very slight amount of analytical rigor needed to atomize data, normalize data, and think about relationships between data makes you smarter, gives you many more options for slicing & dicing the data, and if you have a proper RDBMS, it runs faster'n shit. Parsing/serializing/deserializing JSON blobs, not so much.
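As a toy illustration of the slicing & dicing point (a hypothetical customers/orders schema, sqlite3 only to keep it self-contained):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        total_cents INTEGER NOT NULL
    );
    """)
    db.execute("INSERT INTO customers (id, name) VALUES (1, 'Ada'), (2, 'Grace')")
    db.execute("INSERT INTO orders (customer_id, total_cents) VALUES (1, 500), (1, 700), (2, 300)")

    # One declarative query instead of loading blobs and looping over them in app code
    rows = db.execute("""
        SELECT c.name, COUNT(*) AS orders, SUM(o.total_cents) AS spent
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name
    """).fetchall()
    print(rows)  # e.g. [('Ada', 2, 1200), ('Grace', 1, 300)]

Any new question about the data is another query, not another round of deserializing and reshaping blobs.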
Christ - what is with these Benjamin Button development fads? Dealing with the NoSQL silliness of the last few years was bad enough, but just wait until the next generation of devs discovers storing shit in flat files on the local filesystem...
Ok, we all know how a key-value store is not competitive with current SQL DBs. But let's remember that all of this NoSQL trend started with SQL databases being a poor fit for a distributed environment back in the day.
By the way, what I would actually love to have is a LINQ-like API abstracted over a key-value store, and access to a SQL AST also in the language. For me I think this is the desired middle ground, where you can scale and compose data over key-value, but also not be stuck with SQL data mappings inside your codebase.
I mean, a relational algebra API that deals with any KV store, be it LINQ or the backend of something like Presto (or the DataFrame of Spark).
If you need something like a SQL interface somewhere, you could build it by stitching the SQL AST with the engine you use directly in the language.
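Just to sketch the shape of that idea (this is a made-up, minimal composable layer over a plain dict, not LINQ or Presto):

    from dataclasses import dataclass
    from typing import Any, Callable, Dict, Iterable

    Row = Dict[str, Any]

    @dataclass
    class Query:
        # Each operator wraps the previous one, so queries compose lazily
        source: Callable[[], Iterable[Row]]

        def where(self, pred: Callable[[Row], bool]) -> "Query":
            return Query(lambda: (r for r in self.source() if pred(r)))

        def select(self, *cols: str) -> "Query":
            return Query(lambda: ({c: r[c] for c in cols} for r in self.source()))

        def to_list(self):
            return list(self.source())

    kv = {
        "user:1": {"id": 1, "name": "Ada", "active": True},
        "user:2": {"id": 2, "name": "Grace", "active": False},
    }

    q = Query(lambda: iter(kv.values())).where(lambda r: r["active"]).select("id", "name")
    print(q.to_list())  # [{'id': 1, 'name': 'Ada'}]

A real version would push filters down into the store and could be fed from a SQL AST, but the point is the relational operators live in the language, not in string-mapped SQL.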
Ugh... what a headache when a user needs to change their email. The mania for "schema-less" KV stores is so puzzling to me. It just takes all the stuff that used to be handled natively and seamlessly by the DB engine and forces you to reimplement it in code, only with more bugs, less efficiency, and less flexibility. I fought this battle on my current team and lost, and we have been paying a steep price ever since.
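To spell out the email example: once the email lands in a key or a hand-rolled index, a one-line UPDATE becomes a multi-step, non-atomic dance that the application owns. A rough sketch (the key names are hypothetical):

    # "user:<id>" holds the record, "email:<address>" is a hand-rolled unique index
    store = {
        "user:42": {"id": 42, "email": "old@example.com"},
        "email:old@example.com": 42,
    }

    def change_email(store, user_id, new_email):
        # Everything the DB engine would do for you (uniqueness, atomicity) is now app code
        if f"email:{new_email}" in store:
            raise ValueError("email already taken")   # racy without a real transaction
        user = store[f"user:{user_id}"]
        old_email = user["email"]
        store[f"email:{new_email}"] = user_id          # crash here and the index is inconsistent
        user["email"] = new_email
        del store[f"email:{old_email}"]

    change_email(store, 42, "new@example.com")
    print(store)

In a relational DB that's one UPDATE under a UNIQUE constraint, inside a transaction.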
So, I heard that Amazon developed Dynamo because they realized that most of their table lookups were either (a rough sketch of both patterns follows the list):
* looking up an item by a key
* selecting a range of items
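Roughly, those two access patterns look like this (a plain dict for point lookups plus a sorted key list for ranges; nothing Dynamo-specific, just the shape of it):

    import bisect

    items = {
        "order#2024-01-01": {"total": 10},
        "order#2024-01-02": {"total": 20},
        "order#2024-01-03": {"total": 30},
    }
    sorted_keys = sorted(items)

    # 1. Look up an item by key
    print(items["order#2024-01-02"])

    # 2. Select a contiguous range of items by key bounds
    lo = bisect.bisect_left(sorted_keys, "order#2024-01-01")
    hi = bisect.bisect_right(sorted_keys, "order#2024-01-02")
    print([(k, items[k]) for k in sorted_keys[lo:hi]])

If that really is all your workload does, a KV store is a reasonable fit; the trouble starts when it isn't.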
I would never choose a key-value DB for "scalability" reasons. It's very unlikely that I will be hitting the kind of scale where it would actually make a difference.
Where I have found it useful in my particular line of work is in a fairly narrow context: working in a microservice architecture where you create a single service between your system and a 3rd party system, and its database is essentially a mapping of your own system's IDs to the IDs of the 3rd party.
Honestly, that plus one or two secondary indexes is all I've really needed. I've done that for 3 different integrations over the course of 2 years now, and it's worked out quite well so far. The fact that we have a strict policy of not sharing db access also means that I'm not too worried about other services trying to query the data in weird ways.
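For what it's worth, the entire data model in that kind of integration service can be as small as this (hypothetical names, a dict standing in for the KV store, with the "secondary index" being just a second key pointing back the other way):

    store = {}

    def link(internal_id: str, external_id: str) -> None:
        store[f"internal:{internal_id}"] = {"external_id": external_id}
        store[f"external:{external_id}"] = {"internal_id": internal_id}  # secondary index

    def to_external(internal_id: str) -> str:
        return store[f"internal:{internal_id}"]["external_id"]

    def to_internal(external_id: str) -> str:
        return store[f"external:{external_id}"]["internal_id"]

    link("user-123", "acme-987")
    print(to_external("user-123"), to_internal("acme-987"))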
There are of course fancier patterns for structuring your data in a key-value store, but the main benefit of KV stores starts to diminish once you go down that path.
alternate title: "Ditch your decades-old, stable technology that everybody in the industry knows for our product that makes absolutely no case for use beyond a basic user query."
I agree that dynamism can be helpful for prototyping and rush-job applications, but I'd rather "dynamic relational" be implemented for such: https://stackoverflow.com/questions/66385/dynamic-database-s...
One can gradually add constraints to give it RDBMS-like strictness as the project matures. And you can use SQL on it, with minor modifications to assist in comparing dynamic (implied) types.
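The "start loose, tighten later" idea, sketched as an ordered list of migrations (Postgres-flavored DDL kept as strings; the table and column names are made up, and you'd run each statement with whatever driver you use):

    migrations = [
        # Day 1: prototype with almost no constraints
        "CREATE TABLE events (id bigserial, payload jsonb)",
        # Later: the shape has settled, so let the database start enforcing it
        "ALTER TABLE events ADD PRIMARY KEY (id)",
        "ALTER TABLE events ALTER COLUMN payload SET NOT NULL",
        "ALTER TABLE events ADD COLUMN user_id bigint REFERENCES users(id)",  # assumes a users table exists
        "ALTER TABLE events ADD CONSTRAINT payload_is_object CHECK (jsonb_typeof(payload) = 'object')",
    ]

    for sql in migrations:
        print(sql)  # or execute against the database as the project matures

You keep the fast prototyping at the start, but the strictness ends up in the database instead of scattered through application code.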