Technologies like immutable log-structured storage do have advantages from the business perspective, such as a lower defect rate and faster development. Few people use technologies just to play with them when something serious is at stake (or so I hope).
OTOH not understanding business requirements before picking technologies is indeed a problem. You have to ask a lot of questions to extract the relevant info from the business side before you can make a choice. This is especially important for tacit, "obvious" assumptions that business people honestly forget to mention. (Like, water is wet, daytime sky is blue, and a kilobyte is 1024 bytes; these are facts that everybody surely knows, right?)
Another thing is that requirements change constantly, but slowly. A key assumption of the architecture that was correct a year ago may be challenged due to business considerations, legislation, etc. You have to make your architecture flexible enough to allow for unexpected shifts, but this comes at the price of making it less simple, less elegant, and less error-resistant. Yes, that contradicts other business interests (less downtime, faster feature rollout, smaller IT team headcount, etc). You have to strike a balance, and ideally be able to shift the balance without a major rewrite if need be.
So the problem is not in the architecture chosen, to my mind, but in the (wrong) process of choice.
>>> Few people use technologies just to play with them when something serious is at stake (or so I hope).
I wish that were true. For example, I'd bet fewer than 10% of the people who use Kafka actually need it. There's nothing Kafka can do that SQL can't; it's a highly specialized tool that drops 90% of database features for a performance gain. I suspect very few companies need Kafka. [Aside-- honestly, Kafka should exist as a storage engine within SQL and nothing else]
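To make the "SQL can do it" claim concrete, here's a minimal sketch of a Kafka-style append-only log with offset-based consumption built on a plain SQL table, using Python's stdlib sqlite3. The table and column names are illustrative, not from any real system, and this obviously omits the partitioning and throughput work that is Kafka's actual value.

```python
import sqlite3

# In-memory database stands in for a real SQL server.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE log (
        seq     INTEGER PRIMARY KEY AUTOINCREMENT,  -- monotonic offset
        topic   TEXT NOT NULL,
        payload TEXT NOT NULL
    )
""")

def produce(topic, payload):
    # Appending a row is the equivalent of publishing to a topic.
    conn.execute("INSERT INTO log (topic, payload) VALUES (?, ?)",
                 (topic, payload))
    conn.commit()

def consume(topic, after_seq):
    # Each consumer remembers its own offset, as in Kafka consumer groups,
    # and reads everything published since.
    return conn.execute(
        "SELECT seq, payload FROM log WHERE topic = ? AND seq > ? ORDER BY seq",
        (topic, after_seq),
    ).fetchall()

produce("orders", "order-1")
produce("orders", "order-2")
print(consume("orders", 0))  # both messages, in append order
```

Re-reading from offset 0 replays the whole log, which is the replayability property people usually reach for Kafka to get.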
It's the whole "Mongodb is webscale" debate again.
Messaging has been a tool of enterprise architecture for 10+ years and is not so much a replacement for SQL as a way to ship information between different service backends through a small and well-defined interface, rather than the enormous coupling surface area of sharing a DB. I would expect most Kafka messages to both originate and terminate in SQL databases.
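A toy sketch of that originate-in-SQL, terminate-in-SQL shape: two services with their own databases, and a message bus as the only coupling surface between them. A plain Python list stands in for the broker, and all schemas and names are made up for illustration.

```python
import json
import sqlite3

# Service A owns the orders database.
orders_db = sqlite3.connect(":memory:")
orders_db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
orders_db.execute("INSERT INTO orders VALUES (1, 9.99)")

# Service B owns the billing database; it never reads A's tables directly.
billing_db = sqlite3.connect(":memory:")
billing_db.execute("CREATE TABLE invoices (order_id INTEGER, total REAL)")

bus = []  # stand-in for Kafka/RabbitMQ/etc.

# Producer side: the message originates from a SQL row.
for oid, total in orders_db.execute("SELECT id, total FROM orders"):
    bus.append(json.dumps({"order_id": oid, "total": total}))

# Consumer side: the message terminates in another SQL database.
for raw in bus:
    msg = json.loads(raw)
    billing_db.execute("INSERT INTO invoices VALUES (?, ?)",
                       (msg["order_id"], msg["total"]))

print(billing_db.execute("SELECT * FROM invoices").fetchall())
```

The JSON payload is the "small and well-defined interface"; either side can change its own schema freely as long as the message format holds.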
The place I often see this is in localization. If I had a $CURRENCY for every time I've seen people try to build their own translation layer rather than relying on whatever's built in to the framework they happen to be in... well, I'd probably have enough for a decent dinner, but that's still far too much.