So, perhaps we should buy an order/demand management system rather than build it. This too has its trade-offs; SAP is in business for this very reason. There is a massive complexity and capital trade-off when buying a system: it is never just “buy”, it is buy, install, configure, and integrate, and don’t customize so much that you can’t upgrade... I’ve seen many cases where that game is 10x the cost of “build with a small team using open source and/or simple cloud services”.
So we eventually solve the problems, somewhat messily, with newer architectures, and eventually hone our learnings into patterns that Martin Fowler inevitably publishes. These become popular techniques with every new generation of developers, harden into fads ... and may be misapplied.
I think it’s the way it’s always been. Today CQRS/ES or microservices are the fad. 10 years ago it was web services and ESBs, 15 years ago it was transactions and distributed objects, and 20 years ago it was CGI scripts and Perl. All of these solved lots of issues and caused lots of issues. The question is whether they solved more problems than they caused; the record varies.
Personally I have seen CQRS/ES in lots of places lately for legacy modernization. It’s been around under various guises for a long time (10 years at least): cache your legacy data, expose it with an API (you absolutely can do CQRS with REST, btw - the commands themselves are resources), force all updates to go to the legacy only through messages/events and use those to keep the cache coherent, and in theory you can strangle the legacy gradually without a ton of huge changes. Eric Evans indirectly talks about this as the 4th strategy in his great talk on “four ways to use domain driven design with a legacy”. One absolutely should consider the other three ways first (a bubble context / repository, an anti-corruption layer, etc.).
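To make the “commands as resources” idea concrete, here’s a minimal sketch (all names and endpoints are hypothetical, not from any particular project): instead of PUT-ing new state onto the legacy entity, the client POSTs a command as its own resource, the server applies it to the legacy system, and the client can GET the command to check its status.

```typescript
// Hypothetical "commands as resources" sketch. The command itself gets a
// URI and a lifecycle, which fits REST's resource model without requiring
// event sourcing.

type CommandStatus = "accepted" | "applied" | "failed";

interface ChangeShippingAddress {
  kind: "ChangeShippingAddress";
  orderId: string;
  newAddress: string;
}

interface CommandResource {
  id: string; // e.g. exposed as /orders/{orderId}/commands/{id}
  status: CommandStatus;
  body: ChangeShippingAddress;
}

const commands = new Map<string, CommandResource>();
let nextId = 0;

// POST /orders/{orderId}/commands -> 201 Created with a command URI
function postCommand(body: ChangeShippingAddress): CommandResource {
  const cmd: CommandResource = { id: String(++nextId), status: "accepted", body };
  commands.set(cmd.id, cmd);
  return cmd;
}

// GET /orders/{orderId}/commands/{id} -> current status of the command
function getCommand(id: string): CommandResource | undefined {
  return commands.get(id);
}

// Worker that forwards accepted commands to the legacy system and marks
// them applied (the legacy call is stubbed out here).
function drainCommands(): void {
  for (const cmd of commands.values()) {
    if (cmd.status === "accepted") cmd.status = "applied";
  }
}
```

The point is that the write side is modeled as POSTed command resources with their own URIs and state transitions, so the approach stays plain REST.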
The other context where I saw CQRS without ES is when you have a rich data display and your simple three-tier JS-to-Java-to-ORM stack is starting to creak. I had a project that required a near real-time, spreadsheet-like view with security-role filtering of data, custom column sorting, transforms, and consistent queries across 10 tables, which also allowed full-text autocomplete search across the grid. Materialized views / denormalization wouldn’t work too well in this case because the updates came into the various tables from business events and other team members, and the grid needed to be up-to-the-second valid and quickly refreshed. The queries on the SQL database with Hibernate HQL wound up being massive 10-way outer joins, a bunch of nested scalar subqueries, lots of dynamic WHERE clauses and GROUP/HAVING clauses, plus full-text indexing (this all ran in under 200ms, mind you, so not terrible performance-wise :). The problem was that these were unwieldy to maintain as new data and indexing requirements came up, and required a deep understanding of SQL voodoo and performance tuning. This was not a big project either - high value (several hundred million), high impact (multi-billion-dollar revenue stream), but a small team (8 people) and a modest budget ($1m). Migrating our ORM to write commands on SQL and using Solr for queries was the right move for the health of the system and long-term performance. Btw, this was a project that was going to go on SAP for the low price of $30m and ship in 9 months, vs shipping in 3 months and evolving it after...
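The shape of that split can be sketched like this (a minimal illustration only - in-memory maps stand in for the SQL system of record and the Solr index, and all names are made up): every write goes through a command handler that updates the system of record and then refreshes a denormalized read model, so the grid query becomes a flat lookup instead of a 10-way join at request time.

```typescript
// Minimal CQRS-without-ES sketch: one write path, one denormalized read path.

interface OrderRow { id: string; customer: string; total: number; }

// Write model: the system of record (SQL via the ORM in the real project).
const sqlTable = new Map<string, OrderRow>();

// Read model: flat, denormalized documents (the Solr index in the real project).
interface OrderDoc { id: string; searchText: string; total: number; }
const searchIndex = new Map<string, OrderDoc>();

function project(row: OrderRow): OrderDoc {
  // Denormalize into exactly the shape the grid needs, so queries stay trivial.
  return { id: row.id, searchText: `${row.id} ${row.customer}`.toLowerCase(), total: row.total };
}

// Command handler: the only path that mutates state, which is what keeps
// the write side and the read side coherent.
function handleUpsertOrder(row: OrderRow): void {
  sqlTable.set(row.id, row);             // 1. write to the system of record
  searchIndex.set(row.id, project(row)); // 2. update the read model
}

// Query side: substring "autocomplete" over the denormalized docs, standing
// in for a Solr full-text query.
function search(prefix: string): OrderDoc[] {
  const p = prefix.toLowerCase();
  return [...searchIndex.values()].filter(d => d.searchText.includes(p));
}
```

The payoff is that adding a new searchable column means extending the projection, not rewriting a hand-tuned multi-way join.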
My point is that...
- new architects always want to design-by-resume to some degree, and tackle the big bad hairy stuff with new techniques. This is why we see people throwing out proven components for terrible replacements on day 1 and then going back to the old ones 3 years later... looking at you, Mongo and Postgres! But sometimes the new technique is actually better (looking at you, Kafka)
- Most people only learn through failure; they don’t read the warning labels ... Applying patterns tastelessly is a rite of passage
- even in smaller projects these patterns have applicability
- I’ve rarely bought software that I loved; it’s all 10x more complicated than it needs to be and didn’t necessarily do a better job ... that said, there’s also no guarantee that you and your team won’t build something that is 10x more complicated than it needs to be... “all regrets are decisions made in the first day”, as they say... it really depends on your luck and circumstances.
Perhaps one of the missing lessons of architecture is teaching people how to evaluate tradeoffs - in other words, “taste”. I don’t think we’ve ever really had good taste as an industry; buzzword bingo has always ruled, with some exceptions. One of the things I loved about Roy Fielding’s REST thesis was that it gave a way to analyze the capabilities, constraints, and tradeoffs of an architectural structure consisting of components, connectors, and data elements. That was the most important takeaway of that work, IMO; we never seem to have learned how to look at these critically, in favour of buzzword bandwagon jumps.
Re your comment on "taste"... Anders Hejlsberg is on video saying his choices in compiler and IDE design come down to taste.
I think he's got good taste, so I followed his platforms from Turbo C++, to Delphi and C++ Builder, then onto .NET and now TypeScript (yes, I skipped J++). When you actually understand how he thinks and how the tools are meant to be used, they are incredible. He balances the tradeoffs to achieve fast performance in all aspects, from design to compilation to great runtime performance, with actual simplicity and ease of use. I learnt so much reading the VCL source code...
Anyway I almost agree when you say >> I don’t think we’ve ever really had good taste as an industry.
But Anders' work shines through as a guiding light, when people really get it.
Interesting. What do you mean by commands on SQL?
It’s been 5 years or so, and I don’t think this has changed much with either Elastic or Solr product-wise.