In the late 2000s and early 2010s, I remember seeing lots of hype around building distributed systems using message queues (e.g., Amazon SQS, RabbitMQ, ZeroMQ). A lot of companies had blog posts highlighting their use of message queues for asynchronous communication between nodes, and IIRC the official AWS design recommendations at the time pushed SQS pretty heavily.
Now, I almost never see engineering blog posts or HN posts highlighting use of message queues. I see occasional content related to Kafka, but nothing like the hype that message queues used to have.
What changed? Possible theories I'm aware of:
* Redis covered most of the use case (plus caching), so it no longer made sense to pay the operational cost of running a separate message broker; Kafka picked up the really high-scale applications. (A sketch of the Redis-as-queue pattern follows this list.)
* Databases (broadly defined) got a lot better at handling high scale, so system designers moved more of the "transient" application state into the main data stores.
* We collectively realized that message-queue-based architectures don't work as well as we hoped, so we build most things in other ways now.
* The technology just got mature enough that it's not exciting to write about, but it's still really widely used.
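To make the first theory concrete, here's a minimal sketch of the pattern that let Redis absorb so much of the simple-queue use case: a plain list used as a work queue via LPUSH/BRPOP. This assumes a local Redis server and the redis-py client; the "jobs" key name and task shape are just for illustration.

```python
# Minimal work-queue sketch on a Redis list (assumes redis-py and a
# local Redis server; the "jobs" key name is illustrative only).
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def enqueue(task: dict) -> None:
    # Producers push serialized tasks onto the left end of the list.
    r.lpush("jobs", json.dumps(task))

def work_once(timeout: int = 5):
    # Consumers block-pop from the right end, giving rough FIFO order.
    # BRPOP returns a (key, value) tuple, or None on timeout.
    item = r.brpop("jobs", timeout=timeout)
    if item is None:
        return None
    task = json.loads(item[1])
    # ... process the task here ...
    return task

if __name__ == "__main__":
    enqueue({"kind": "send_email", "to": "user@example.com"})
    print(work_once())
```

The well-known catch with this pattern, and one reason teams later reached for Redis Streams, Kafka, or a database-backed queue: BRPOP removes the item before it's processed, so a worker that crashes mid-task silently drops it.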
If people have experience designing or implementing greenfield systems based on message queues, I'd be curious to hear about it. I'd also be interested in hearing any war stories or pain points people have run into using message queues in production systems.
That is, there was a big desire around that time period to "build it how the big successful companies built it." But since then, a lot of us have realized that that complexity isn't necessary for 99% of companies. Couple that with hardware and standard databases getting much better, and there are just fewer and fewer companies who need all of these "scalability tricks."
My bar for "Is there a reason we can't just do this all in Postgres?" is much, much higher than it was a decade ago.
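For what it's worth, the pattern that raised that bar the most for me is the SKIP LOCKED job queue: plain Postgres giving you a competing-consumers queue with transactional claims, no broker to operate. A minimal sketch, assuming psycopg2 and a hypothetical `jobs(id, status, payload)` table:

```python
# Queue-in-Postgres sketch using FOR UPDATE SKIP LOCKED (assumes psycopg2;
# the connection string and the "jobs" table schema are illustrative only).
import psycopg2

conn = psycopg2.connect("dbname=app")

def enqueue(payload: str) -> None:
    # "with conn" commits the transaction on success, rolls back on error.
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO jobs (status, payload) VALUES ('queued', %s)",
            (payload,),
        )

def claim_next_job():
    # SKIP LOCKED lets many workers poll the same table concurrently
    # without blocking on, or double-claiming, each other's rows.
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            UPDATE jobs SET status = 'running'
            WHERE id = (
                SELECT id FROM jobs
                WHERE status = 'queued'
                ORDER BY id
                FOR UPDATE SKIP LOCKED
                LIMIT 1
            )
            RETURNING id, payload
            """
        )
        return cur.fetchone()  # None when the queue is empty
```

In a real system you'd still want a sweep for rows stuck in 'running' (or hold the claiming transaction open while processing, so a dead worker's row rolls back to 'queued'), but that's a small amount of code compared to running and monitoring a separate broker.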