I only read the first third of the DDD book, the conceptual parts, which I thought were excellent, especially concepts like ubiquitous vocabulary and domain boundaries, which put a name to practices we’d already discovered.
But the underlying problem is that engineers are learning the domain as they write the code. This leads them to make invalid assumptions, which can go very deep, and make it extremely difficult to factor out later. If you build a feature based on bad assumptions, it can be really hard to remove or refactor it years later.
As it happens, I have written about 10 billing platforms in my life. If I were to build a new one today, I know exactly how to do it, and it would be super fast, efficient, and cover all the edge cases you can think of (please hire me :). My code structure would look nothing remotely like that written by someone doing this for the first time. They would make a bunch of fundamental mistakes, and most of those mistakes would be conceptual, because how could they know? The problems range from accounting principles, to the nature of time, to user behaviour, and much more.
And the problem is that they will make the mistakes not because they don’t know how to write software, but because they haven’t spent years coming to understand the nuances and constraints of the billing domain.
No amount of software engineering boilerplate can fix that. In fact, the problem is that software engineering is largely discussed as a homogenous activity, but actually there are a million domains, each of them with different constraints, and because there are a limited number of specialists, we are constantly reinventing the wheel. To use my example, anyone can build a billing system because it seems obvious. But mostly those systems are going to be pretty flakey when they encounter the real world. And this principle applies to just about any domain.
I thought the DDD hype has died down a bit, but I guess not.
Eric Evans (inventor of DDD) has said in recent years that unfortunately once a team is big enough, all the invisible conceptual boundaries between domains blur and disappear. People do NOT have the discipline to do DDD correctly.
But do you know what helps? Having physical boundaries you can't cross so easily. Microservices. Or as I call them... just "services". Eric Evans has said that people using microservices may represent the most successful application of DDD's principles, not because the people using them were fans of DDD or even aware of it, but because microservices enforce the patterns Evans was describing.
> But do you know what helps? Having physical boundaries you can't cross so easily. Microservices. Or as I call them... just "services". Eric Evans has said that people using microservices may represent the most successful application of DDD's principles, not because the people using them were fans of DDD or even aware of it, but because microservices enforce the patterns Evans was describing.
Sir, your assembly code is so hard to read with all them GOTOs. You should do structured programming, use constructs such as loops to make it easier to follow.
Man, your code is so long written like that, you should break it into functions to make it easier to reason about.
Dude, you have so many functions it's impossible to make heads or tails of the logic, you should break it into classes to be able to abstract away all of this detail.
Bro, you have so many classes I don't even know where to start. You should introduce modules so that it becomes possible to isolate different parts of the design.
Dawg, you have so many modules, you ought to break them into small different runtimes communicating via a rest protocol to isolate different parts of the code.
Choom, you have so many microservices, you ought to break them into different microclusters to separate the different parts of the API.
X'man, you have so many microclusters, you really ought to break them into different miniclouds to separate the functionality and make it easier to follow.
𐀘𐀐, you have so many miniclouds it's impossible to deal with this code, you really ought to separate it into functional minicloud clusters.
Exactly. We need RECURSIVE DIVISION of compute/state entities.
Our main mistake is that we invent a level of division and never make it recursive. E.g. a class is a bad fit for being an entire module: it can't contain other classes (inner Java classes are not what I'm talking about), nor can classes cross machine boundaries easily in many languages, and so on.
We need a unit of division that works across machines just as well as it works across files, or across statements in the same file, up from data centers to CPU instructions.
Our languages and platforms are full of the SAME THING but SLIGHTLY DIFFERENT, under DIFFERENT NAMES. You listed many of them. Machine opcode, expression, function, class, module, service etc.
Modules are recursive. Libraries are recursive. Hell, functions and classes are recursive too. I don't think we have any non-recursive code abstraction tool.
Some of the problem is that people need different capabilities from different abstraction layers. The rest of the problem is that there are a lot of incompetent people creating software.
If you start listing what different capabilities you need on each layer, you'll notice it can all be done, as a very lean abstraction, in one layer. I'm working on creating a platform that's doing just that, BTW.
I'll cite Erlang as an example that comes somewhat close to what I mean. An Erlang process is a function, a class, a module, and a remote service, all at once. It can be done. We just never stopped to truly think about it and try.
> We just never stopped to truly think about it and try.
Oh, you are right about this. But a lot of people stopped to think about it. It always seemed to fail due to market failures, like most other innovations in development.
Yes, the network effects of pre-existing platforms are big. But we should keep trying, I think. Such a platform should also be inclusive enough that it can act as a glue layer for, and INSIDE, existing platforms, but have richer semantics than, say, JSON, Protobuf or similar protocols offer.
It's as I've always said, every function ("unit") should be responsible for a single thing, and it must be developed by an independent company with no knowledge of the other companies involved. Otherwise you just can't do software engineering right.
For that to work, there has to be, somewhere, a clear understanding of how the units all work together to achieve the higher-level goals without violating the higher-level constraints which, in practice, increase in number and scope the higher up the pyramid of abstraction you go. Pace Robert Martin, divide-and-conquer cannot be used to reduce systems-level thinking to doing just one thing, even in an abstract sense.
> For that to work, there has to be, somewhere, a clear understanding of how the units all work together to achieve the higher-level goals without violating the higher-level constraints which, in practice, increase in number and scope the higher up the pyramid of abstraction you go.
Oh, that's an implementation detail. The market will sort it out.
I believe that never works in practice. The thing that binds units together is also code. Only while designing / implementing that code do you encounter the flaws in the interfaces of those units, and conclude that half of the units need to be replaced or their specifications need to change.
But this approach, Microcompanies as it is called (units developed by independent companies), is also partly no-code, in that the specifications and different functional units are glued together using lawyers.
I'm not decided on that. On the one hand, sure, blockchain should be involved! But the single thing per company requirement would mean that the company could do nothing else but a blockchain with nothing on it.
But it does sound enticingly pure, a conglomerate of blockchain companies without further function.
I’ve called it Model-T development[1]. The Model-T brought standardization to building cars where companies only built to a very specific spec. It didn’t matter that some companies would be better than others, or cheaper/expensive. The point was choice, competition, and interchangeable parts. Granted, I don’t actually know about that, but that’s the end result.
[1]: I apparently deleted the post so I’ll have to undelete it and come update this comment.
It’s called separating interface from implementation. When using an interface, only make assumptions on what the interface specifies, and not on how it may happen to be implemented. When implementing an interface, only make assumptions on what the interface specifies, and not on how it may happen to be used.
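To make that concrete, a minimal sketch in Python using typing.Protocol (the gateway names are made up):

    from typing import Protocol

    class PaymentGateway(Protocol):
        # The interface: both sides depend only on this.
        def charge(self, customer_id: str, amount_cents: int) -> str:
            """Charge the customer and return a transaction id."""
            ...

    def collect_payment(gateway: PaymentGateway, customer_id: str, amount_cents: int) -> str:
        # The caller assumes only what the interface specifies,
        # not how any particular gateway happens to be implemented.
        return gateway.charge(customer_id, amount_cents)

    class FakeGateway:
        # The implementer likewise assumes nothing about who calls it.
        def charge(self, customer_id: str, amount_cents: int) -> str:
            return f"txn-{customer_id}-{amount_cents}"

    print(collect_payment(FakeGateway(), "cust-42", 999))

Since Protocol is structural, FakeGateway satisfies the interface without inheriting from it; either side can change its internals freely as long as the interface holds.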
this is a good summary of why I pay zero attention to power point decks and blog posts. there is never a right answer and it's just people masturbating in circles. it's not rocket science, half of the time it's not even computer science. design patterns, at a certain point, become full of themselves.
Eh, I'd argue the amount of functionality has if anything stagnated.
Computers today don't do significantly more than they did 25 years ago. They just do the same things with 1000 times more resources across a global computer network.
Likewise, velocity is startlingly bad if you follow conventional advice.
>But do you know what helps? Having physical boundaries you can't cross so easily.
Rarely have I disagreed with something more strongly :)
If a team lacks the discipline/skill/wherewithal to create good boundaries within a single code base when stakes are at their lowest, they absolutely don't have the skill or discipline to do it ahead of time, across N code bases, and with a network in between all of them.
Breaking your system apart too early, when you know the least about it, with the rationale that doing so will magically stop the problems that a lack of engineering skill creates, has, at least in the small slice of the world I've encountered, resulted in nightmarish levels of complexity (some of which I'm directly responsible for, for the reasons stated! (whoops!)).
Why do you need to do it ahead of time? Typically what actually happens is you build a monolith, then note that one team needs to own one chunk, so you split it out into its own service.
For example from my experience, splitting an Auth/User service for security/compliance, or splitting a Payment service for the same reasons.
Or splitting out a core low-level platform layer that per-product teams can build atop, allowing each to iterate faster, while the Platform layer focuses on clean abstractions.
Nothing in DDD or microservices says you need to sit down and design the service boundaries up-front, indeed most of the advice I’ve seen in the community suggests the opposite.
You can disagree with it and blame the developers, but the fact is that DDD fails basically every time, unless physical boundaries stop you from failing. It's like saying a good driver won't have an accident on a sharp corner of an icy road, and yet you have a ton of accidents on that corner of that icy road. Facts are facts. Just as it's a fact that you can't expect every organization to be staffed by 100% geniuses who never make a mistake.
Part of the problem is the platform itself.
Most mainstream languages today ALL support shared mutable state. Meaning you can hold a pointer or a handle to something, mutate it, and someone else having the same pointer or handle has it changed under their nose.
When you fetch, say, JSON over the network, the party who sent it to you can't change it under your nose either intentionally or accidentally. It's a snapshot of data you own, and you can make decisions on at your own leisure. Sure, the snapshot may get out of date by the time you send your next API request, but this contract is clear and obvious.
Meanwhile, the entangled meshes of mutable code in, say, C, C++, Java, Swift, Python, JS, etc. often make the mess unavoidable.
Rust goes to some length to stop this problem, but it's a language that's low-level (by modern standards) and technical, and I don't think anyone uses it for enterprise automation exactly.
Another problem is nominal (vs. structural) type systems. Well, I don't have to explain, but when you fetch JSON from an API, a nominal type won't stop you from reading that data; you care about the structure.
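As a small Python sketch of both points (invented field names): the response is deserialized into an immutable snapshot you own, and only the structure of the payload matters:

    import json
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ProfileSnapshot:
        # We only care that these fields exist in the payload,
        # not what nominal type the sender used internally.
        user_id: str
        display_name: str

    raw = '{"user_id": "u1", "display_name": "Ada", "extra": "ignored"}'
    data = json.loads(raw)
    snapshot = ProfileSnapshot(user_id=data["user_id"], display_name=data["display_name"])

    # The snapshot is ours; nobody can mutate it under our nose.
    # snapshot.display_name = "Eve"  # would raise dataclasses.FrozenInstanceError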
As far as I know developers, they will mess with all kinds of technology given to them.
You say side effects are a problem but you also say the developer cares about his structure in microservices.
When you can’t handle side effects how will you handle microservices?
You could do the same without microservices by just abstracting in libraries which have a defined API.
Microservices will normally make your application more complex. Developers will mess with them too. They can’t change the JSON but some dev will change the API of some service and read the wrong property…
> unless physical boundaries stop you from failing
It is also possible to just police it. Supervise it. This is partly anathema now with agile etc. But the iPhone is what it is because there were a few dictators at the top saying "no". This is actually more effective than a physical boundary because:
1) People will hack around the physical boundary anyway.
2) It forces a conversation between the supervisor and the developer every time the supervisor raises an issue. At this point the developer may get a chance to learn something about the reasons for higher level structure. But more importantly the supervisor might learn something about what is wrong with the higher level structure.
> It is also possible to just police it. Supervise it. This is partly anathema now with agile etc. But the iPhone is what it is because there were a few dictators at the top saying "no".
But then your project design is now dependent on org hierarchy, and on getting the right kind of "people". This isn't guaranteed or easy to do, so I would not consider it a serious solution.
They solve the mutable shared state problem that I'm specifically talking about. Which is that it's clear which state you own, and which state you don't own.
"Shared" implies shared ownership, this is where the confusion comes from. When you ask an API about a user's profile, it's that API's user profile. But the API response itself is entirely yours. It won't change right under your fingertips. The original profile may change, but you know that the profile is not yours already.
Do you religiously deep-clone every container you pass to a function? No. So it's one thing to say you can copy your data, another to know your data is guaranteed to be copied.
Swift has value types which use pass-by-value semantics. Containers use copy-on-write. This is a language where you can quickly build value trees that you know are a guaranteed copy, isolated from whoever sent you this tree.
But most languages have no such infrastructure. Java is adding record types, but they're immutable instead of in-place mutable, which is incredibly inconvenient to work with in nested data structures.
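For comparison, a rough Python approximation of the "guaranteed copy" idea using frozen dataclasses (invented names); the nested replace call also shows the inconvenience mentioned above for immutable records:

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Address:
        city: str
        zip_code: str

    @dataclass(frozen=True)
    class Customer:
        name: str
        address: Address

    original = Customer("Ada", Address("London", "N1"))

    # "Mutating" produces a new tree; whoever gave us `original` keeps theirs intact.
    moved = replace(original, address=replace(original.address, city="Cambridge"))

    assert original.address.city == "London"
    assert moved.address.city == "Cambridge"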
> Rust goes to some length to stop this problem, but it's a language that's low-level (by modern standards) and technical, and I don't think anyone uses it for enterprise automation exactly.
Java's new type offerings are immutable (records, with value and primitive types in the works). Hopefully a push in the right direction.
> once a team is big enough, all the invisible conceptual boundaries between domains blur and disappear
Conway's law:
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.
Precisely. But this is how it should be. The service is a part of the team, but automating the "mundane things" that people can delegate to machines, instead of doing them manually themselves.
This is why, BTW, I'm so dismayed Elon Musk fired 85% of Twitter. Now many of their critical services have teams of zero or one developer. Those are basically dead services walking.
> Precisely. But this is how it should be. The service is a part of the team, but automating the "mundane things" that people can delegate to machines, instead of doing them manually themselves.
The organization is often not reflective of what’s actually being achieved. Instead of Catalog, Offer, Order and Shipment you have Alice’s org, Bob’s org and Charlie’s org. If you encode the interfaces between these (impermanent) group boundaries rather than the conceptual boundaries you enter a world of pain.
Charlie might own Order and Shipment because Charlie is the warehouse exec and 20 years ago, when only physical items were sold, all of the order-processing computers were located in a cage in the warehouse. Fast forward to today and the cage is gone, and the order processing system handles digital items and services in addition to physical goods. But good old Charlie still owns the Order and Shipment system because they were baked together 20 years ago.
This is true, but I think things like DDD provide an antidote (or at least, some pressure to organize along different dimensions).
If you are explicitly talking about business domains and bounded contexts, then you can see areas where the Alice/Bob split is not optimal (typically, lots of team coupling across org boundaries). Without those concepts I think it is easier to justify arbitrary political/territorial org structures.
There is no panacea; one can always provide contorted justifications for things. But I think having a framework really helps, and particularly, having one that includes business stakeholders and not just pure technical functions.
"Microservices" are not just an acknowledgement of this law, they are a full blown exploitation of it. They are (supposed to be?) literal implementations of your organisation's boundaries/domains.
Interested in having the quote as well, and ideally the presentation/keynote/interview that came with it.
From my experience, breaking down a monolith without first identifying and isolating its bounded contexts (as in DDD) is a recipe for disaster. With your domains completely entangled in a single codebase, you won't pull only what you want and you will have to bring a lot of undesired code, behaviours and side effects with it.
The safest recipe, in my opinion, is to break down your monolith in bounded contexts (or ideally, to have focused on it from the start), so you can better reason and understand the advantages and trade-offs of moving each specific domain and responsibilities out of your unique codebase.
I often saw teams motivated to move their code out of a legacy codebase because of its huge technical debt, but they were bound to suffer a death by a thousand cuts by just bringing the same debt into a new service, and adding network constraints on top.
That software and organizations have boundaries and that they matter is true but is too obvious to be interesting. DDD has little to say about the most crucial aspect of those boundaries - where the boundaries go and why.
There is a shared semantic model both inside a boundary and outside. However, DDD has relatively little to say about the most crucial part of that too - how to shape and define that language.
DDD does, however, have a lot to say about how to misapply certain design patterns.
> DDD has little to say about the most crucial aspect of those boundaries - where the boundaries go and why.
The whole point of DDD is that there can't be a one-size-fits-all answer to these questions. Instead, they give you some questions you need to think about, and tools that might help you find answers.
The whole point appears to be that there IS a one size fits all set of design patterns once you've got the right "bounded contexts" and the right "ubiquitous language".
It doesn't have anything interesting to say about either one of those things though. The whole idea appears to be a set up for a bunch of design patterns.
There is definitely a part of the DDD community that wants to do it that way. I call it “domain driven design driven design”. But the big names in DDD have all explicitly said that’s wrong.
I think most orgs don’t act this way, which suggests to me that it’s not actually obvious.
> However, DDD has relatively little to say about the most crucial part of that too - how to shape and define that language.
I don’t think this is actually true. (Relative to what? Other architectural systems?) It strongly advocates for a process of identifying Entities, by iteratively working with the domain experts. The core entities should be domain objects, not technical concepts. (The opposite conceptual pattern I often see is making the core objects generic functional/behavioral/logical operations with the business rules as config that's passed through. Maybe more sensical for pure technical users, but gibberish for business users. Perhaps valid for building a generic business logic platform like SAP, though!) The concept of Ubiquitous Language mandates that you have a dialog with domain experts, and keep doing that as the domain evolves. The concept of a Bounded Context gives guidance on how to judge between different potential splits in your domain, which is more than most architectural frameworks offer. All of these mean you are better placed to agree on boundaries that make sense to the business/domain as well as technical functions.
I think Aggregates and Repositories are good low-level concepts for working with Domain-driven architectures, but they don’t participate in the higher-level organizational discourse.
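As a rough illustration of what such an Entity can look like (a Python sketch with an invented "Ordering" context, not an example from the book):

    from dataclasses import dataclass, field
    from uuid import UUID, uuid4

    @dataclass(eq=False)
    class Order:
        # Named in the ubiquitous language of the bounded context,
        # not in technical terms.
        order_id: UUID = field(default_factory=uuid4)
        status: str = "draft"

        def place(self) -> None:
            if self.status != "draft":
                raise ValueError("only a draft order can be placed")
            self.status = "placed"

        # Entities are compared by identity, not by attribute values.
        def __eq__(self, other: object) -> bool:
            return isinstance(other, Order) and other.order_id == self.order_id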
My definition of DDD involves drawing up a series of relational tables in excel and reviewing those with the business stakeholders.
If the product is like most, I'd then convert those excel workbooks to SQL schemas and start building some vertical slice demos.
None of this has anything to do with micro services, source control, etc. The schema (domain) is the most important part of the product. If your manager can understand it and you have a common language for communicating about it, you might actually have a viable business.
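To make the workbook-to-schema step concrete, a tiny sketch using Python's built-in sqlite3 (the tables are invented stand-ins for whatever came out of the reviewed workbook):

    import sqlite3

    schema = """
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE purchase (
        purchase_id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        placed_at   TEXT    NOT NULL
    );
    """

    conn = sqlite3.connect(":memory:")
    conn.executescript(schema)
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    print(tables)  # ['customer', 'purchase']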
Good data modelling is a big part of it but it’s not keeping you from many potential problems.
If you have several unrelated teams accessing and modifying the same tables, you’re guaranteed to have some headaches the moment one needs to make changes without affecting the others.
So interfaces isolating those access points to data become a must, and having clear owners of those access points that can maintain the interface contract or break it in a safe and coordinated manner is a godsend.
From that point on, the separation of code ownership kinda follows organically from data ownership.
Microservices are absolutely not a necessity, but in non disciplined organisations they tend to help enforce code and data ownership. It’s easier to keep a team from taking shortcuts and importing a piece of code or directly accessing data they shouldn’t when it is not technically possible to do so. Does that warrant the extra complexity? Debatable.
Years ago, when we just founded our company, I asked an experienced CTO: 'In your experience, what is the most expensive thing to adjust later on?'. He said: 'Changes in your datamodel'. That advice worked out well. We spent extra time designing our datamodel and it saved us quite a lot of money / time on refactoring later.
So making schema discussions as concrete as possible with Excel together with business stakeholders sounds like a smart thing to do.
It sounds like you started designing the system from the database perspective.
But doesn't this "skew" your domain model in code, by letting technical details shape your thinking?
I'd prefer to start with an event storming session: visualization of the system and its connections,
then design a proper domain model, correct abstractions, and events between those modules inside the system,
then just implement the persistence layer with a mapper from the domain model to the database model, which in the case of SQL contains those technical details like cache fields, FKs, etc.
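A minimal sketch of that mapper step in Python (invented names), keeping FKs and storage encodings out of the domain object:

    from dataclasses import dataclass
    from typing import Optional

    # Domain model: just the concept, no relational details.
    @dataclass
    class Shipment:
        tracking_code: str
        delivered: bool

    # Database model: FKs, cache fields and storage encodings live here.
    @dataclass
    class ShipmentRow:
        id: Optional[int]
        tracking_code: str
        delivered: int      # stored as 0/1
        order_fk: int       # relational detail the domain never sees

    def to_row(shipment: Shipment, order_fk: int) -> ShipmentRow:
        return ShipmentRow(id=None, tracking_code=shipment.tracking_code,
                           delivered=int(shipment.delivered), order_fk=order_fk)

    def to_domain(row: ShipmentRow) -> Shipment:
        return Shipment(tracking_code=row.tracking_code, delivered=bool(row.delivered))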
1) The database schema is the fundamental model of the business, and all application logic is built on top of that.
2) The application logic is the core and the database is just a utility for persistence of data.
I'm firmly in the first camp and assume the GP is also. I believe data is more crucial and will last longer than any specific application. Thinking of the database as mere "persistence" for application logic is putting the cart before the horse.
I also believe in Fred Brooks:
> Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious.
It's not that one of those is correct and the other wrong. The issue is that one of those can change very easily, but it's much harder for the other.
Also, data definitions are entirely language based, so if you doing DDD, most of what you need to discuss with other teams gets formalized on the data model, not on the logic.
Exactly. It's interfaces all the way down with plain data being the simplest interface at the bottom. (Sometimes if a part of the domain is just about data storage and retrieval then this interface is enough)
I think the OP is leaving out some of the other steps they also take. The tables give you a common language for talking about the entities involved in the business logic that actually does the work.
Though I think I agree with you anyway; it is a chicken-and-egg problem. You can't know all the things you will need a schema for until you understand the business logic requirements well, but it's hard to talk about those requirements without a common language for the entities in the business.
It is inherently flawed though, isn't it? It assumes you're in just one big schema, surely? Part of DDD is breaking that assumption and fragmenting your systems into the disparate domains within your org, fully encapsulating your domains' functionality and data.
As someone else said, it's probably better to start with what events drive your org. What are the communication channels and triggers for those events.
> It sounds like you started designing the system from the database perspective

> I'd prefer to start with an event storming session
Right - We don't literally start with a blank xlsx file on day 1 of a new product or big feature.
The first conversations are about abstract outcomes, market expectations, and eventually more concrete user stories. Once we are talking about what a user experiences, we can begin to accumulate types/properties/relations.
I have done similar prototyping in docs, whiteboards, CSV files... The flow is to focus on the data and what the constraints (rules) on updating the data are. The prototyping of APIs and screens can come later, off of a firm foundation of data, calculations, and rules for creation and change.
"Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." -- Fred Brooks, The Mythical Man Month (1975)
Not the OP, but I’ve tried this before too. A huge benefit is that the everyone at the table understands excel and isn’t intimidated during the conversation. Understanding the domain as an engineer is the toughest, most important, and most rewarding part of DDD imho.
Check out the relational data modelling capabilities in the Power Pivot Diagram view. It helps you graphically see the relations between basic excel tables.
DDD is something not a lot of devs know about, but then every single microservices concept/pattern/architecture is basically some sort of applied DDD. Do I like the philosophy? Yes and no. Yea because it's been a great source of inspiration for me. No because I think Evans, just like Martin and a few others is basically nothing more than a businessman, trying to sell books and workshops and conferences, while being completely detached from the reality of production code, deadlines and code maintainability.
The concepts are really interesting, but I'm pretty sure none of the suggested implementations are remotely acceptable in a professional environment.
Early in my career, I really bought the stuff the Agile founding fathers promoted (Martin, Uncle Bob, Beck, etc). I tried bringing it into my code and pushing it on my teams, but it never went well. I tried finding great examples they've implemented to use as, well, examples - but never found anything. Turns out, these gurus of coding almost never release any open source code to be scrutinized outside of toy examples in their books.
I've realized the reality is that many of their suggestions are actually pretty reasonable, but they take time to implement. Most business software is written under a relative time crunch, and the time required to slow down and properly implement them is excessive and unaffordable.
More specifically, in the case of DDD, most businesses cannot afford to make every developer a domain expert so they can properly refactor to the DDD guidelines. The required think time and one-on-one time with an expert to learn the domain would be far too costly.
Further, it pushes a different type of complexity into the implementation that a non-expert cannot understand, which actually slows down future development when new hires fumble the domain. Thus, new hires end up having a longer ramp up time with less productivity and more handholding during it.
I'm pretty hungry for this level of philosophy in software design, especially when I see us all too frequently just take a loose blog summary of a hot pattern and start building systems with it.
I work with a company that tried to do DDD microservice patterns. It now seems clearer that the team has started to drive towards a practice of sometimes making a new microservice when a new Entity emerges from requirements with no evaluation of whether it is an "Aggregate Root" or other potential analysis.
Using the examples in this article, they might have made a "LineItem" microservice along with their "Order" microservice. This seems obviously awkward when thinking about doing this for something more commonly understood like "Order", but in our more niche domain model it isn't as obvious. So it's great to have some language/categories to point out that "LineItem" is not an "Aggregate Root" and thus it is an indicator that it may not need to have its own bounded-context/microservice. "Order" is an "Aggregate Root" and so it is a stronger candidate for its own bounded-context/microservice.
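Roughly, the distinction in code (a Python sketch, not taken from the article): a LineItem lives only inside the Order aggregate and is reached only through the root, which is why it doesn't warrant its own service:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LineItem:
        sku: str
        quantity: int

    @dataclass
    class Order:
        order_id: str
        _line_items: List[LineItem] = field(default_factory=list)

        # The aggregate root is the only entry point; callers never hold
        # or persist a LineItem independently of its Order.
        def add_line_item(self, sku: str, quantity: int) -> None:
            if quantity <= 0:
                raise ValueError("quantity must be positive")
            self._line_items.append(LineItem(sku, quantity))

        def total_quantity(self) -> int:
            return sum(item.quantity for item in self._line_items)

    order = Order("ord-1")
    order.add_line_item("SKU-7", 2)
    print(order.total_quantity())  # 2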
I think what you describe is a mis-application of DDD. One service per entity is way overkill and nothing like Evans would advocate for. As you note, the whole point of Aggregates is that you can select all the sub-entities in one DB transaction, which you lose across service-boundaries.
One domain service per Bounded Context is the starting point I usually see. And you can decompose into “private microservices” implementing components within a BC as needed. You might also have many “infra services/components” implementing that logical service for the BC. (For example, a service might have Django API, RabbitMQ, Postgres, Redis, Celery components. And you could split an auth microservice from the API if needed for isolation. But the external Service API remains the set of operations that make sense within a BC.)
It sounds like this company might be approaching "services per engineer" instead of measuring "engineers per service", which is a red flag for me. There is a lot of overuse of the microservice pattern as part of the hype wave.
I suspect this gets resolved at this company by composing on the UI a ton. The UI can reach in and get Orders from the Orders microservice, then reach out to the LineItem microservice. Before this post, I've argued it the other way: "A LineItem will always require an Order in context, therefore maybe LineItem should be part of the Order service instead of its own thing."
Since this isn't the first time I've seen this heavy UI composition in my career, I suppose it's time to coin and describe this DDD antipattern :)
Yeah, that "engineers per service" equation seems alarming. I think we're essentially at a 1:1 ratio at a glance; we're not evaluating whether there are "private microservices."
- we did the ubiquitous language part, working with clients to arrive at consistent, particular language to describe the problem
- we then made a big chart of things and their relationships; get clients to sign off this describes problem
- but we didn’t bother with any formal steps past that; we just took the chart as a systems diagram and implemented that
There’s some deep sense in which any program which encodes the chart is a type embodying the semantics of the chart, viewed as a categorical diagram. So it’s not too surprising this works out alright.
I think the ubiquitous language part of DDD is great; I’m not sure it has much to say about turning domain diagrams into software.
> Using the examples in this article, they might have made a "LineItem" microservice along with their "Order" microservice. This seems obviously awkward when thinking about doing this for something more commonly understood like "Order", but in our more niche domain model it isn't as obvious. So it's great to have some language/categories to point out that "LineItem" is not an "Aggregate Root" and thus it is an indicator that it may not need to have its own bounded-context/microservice. "Order" is an "Aggregate Root" and so it is a stronger candidate for its own bounded-context/microservice.
The EJB 1.0 specification made all entity beans remote. This led to every interaction with every entity in the system crossing a process boundary, the way you describe here. This was soon recognized to be an anti-pattern. Soon, a pattern called "Session Facade" was identified - the remote interfaces are coarse-grained operations, working on aggregate roots.
Microservice-per-entity is EJB 1.0 all over again.
IMO there are two good and interesting ideas behind DDD - the notion of an ubiquitous language and bounded contexts. However, these concepts are vague and underdeveloped. I'm not even sure they were original.
Meanwhile it gets way too specific about certain design patterns. These are not as generally applicable as the originators imagined and have a tendency to blow up SLOC and add too many layers of indirection.
DDD sounds amazing, but y'all the pull requests are usually a dozen files with tiny changes. At least from what I have seen.
And the promise is "change is easy" or "impact to existing code is low." This may be true. Is it?
I have the Evans book. It may completely change the way I write code. But I also have "C Interfaces and Implementations," and that promises the same. Yet another is Stepanov's "Elements of Programming," which by the first chapter's end has Feynman telling me to read it again [1].
We just want to make Good Decisions about the code, right? Strategically (architectures) and tactically (pure functions).
I wonder if domain stuff surfaces naturally in SQL. Tables as types and subtypes. "Now build an app around me" feels like a different take from the usual "Now we just need the persistence layer."
[1] No Ordinary Genius: The Illustrated Richard Feynman
> I wonder if domain stuff surfaces naturally in SQL.
Only if your domain is very simple. Database types are not very expressive.
I wonder what is the minimum one must add to support actual people processes. Sum types and interfaces are a must, some kind of recursive namespace is clearly lacking, but those still don't seem sufficient.
Anyway, SOA is something that leads to a kind of segmented DDD. But the software development culture made such a mess with it that you simply can't do anything useful anymore once you acknowledge you are doing SOA.
Function farms. You implement functions as needed, making sure everything halts. The challenge is some users will certainly ask for loops or persist state, and you've handrolled an embedded no-code product.
Even in VBScript, one could chain functions (named as strings):
    Dim Workflow : Set Workflow = New WorkflowRunner ' instantiate (WorkflowRunner is a hypothetical class)
    With Workflow
        .Step("SetupStuff")
        .Step("DoThisThing")
        .Step("DoOtherThing")
        .Step("Customer1CustomFunc")
        .Step("Customer2VerySimilarFunc")
    End With
Internally, the class enumerates in a loop for each record, so you save a level of indentation.
Sum types. I think this could be done with a table representing multiple types [1].
For example, an Organization table requires a Name field and represents subtypes Legal Org or Informal Org.
Legal Org subtype lives in the same table and uses Federal Tax ID Num, Corporation, and Govt Agency fields.
Informal Org uses Team, Family, and Other Informal Org fields.
> The subtypes within an entity should represent a complete set of classifications (meaning that the sum of the subtypes covers the supertype in its entirety) and at the same time be mutually exclusive of each other (an exception ... covered in the next section).
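A rough sketch of that rule enforced in a single table, via Python's built-in sqlite3 (columns trimmed down from the example above; the CHECK constraints are one possible way to do it):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE organization (
        org_id          INTEGER PRIMARY KEY,
        name            TEXT NOT NULL,
        org_type        TEXT NOT NULL CHECK (org_type IN ('legal', 'informal')),
        federal_tax_id  TEXT,   -- Legal Org field
        team_name       TEXT,   -- Informal Org field
        -- subtypes cover the supertype and are mutually exclusive
        CHECK ((org_type = 'legal')    = (federal_tax_id IS NOT NULL)),
        CHECK ((org_type = 'informal') = (team_name IS NOT NULL))
    );
    """)
    conn.execute("INSERT INTO organization (name, org_type, federal_tax_id) VALUES (?, ?, ?)",
                 ("Acme Corp", "legal", "12-3456789"))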
Interfaces. Stored procs as functions could enforce preconditions. That could be aided by unit tests. Or a test generator, because checking for fields and field types should be straightforward, since the table defines them up front (static).
Recursion. Recently, Google says Rune lang supports something "easy" to do in SQL [2]:
> [Hierarchical structures like family trees] can be tricky to model in some languages, yet is trivial in both SQL and Rune
Pretty vague, but I know stuff like org charts can be done with CTEs and recursion (plus sweat and StackOverflow).
Is that what you mean by "recursive namespaces?"
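Concretely, the kind of recursive CTE meant here, run through Python's sqlite3 (table and data invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE employee (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        manager_id INTEGER REFERENCES employee(id)
    );
    INSERT INTO employee VALUES (1, 'CEO', NULL), (2, 'VP Eng', 1), (3, 'Engineer', 2);
    """)

    # Walk the org chart from the root down, tracking depth.
    rows = conn.execute("""
        WITH RECURSIVE chain(id, name, depth) AS (
            SELECT id, name, 0 FROM employee WHERE manager_id IS NULL
            UNION ALL
            SELECT e.id, e.name, chain.depth + 1
            FROM employee e JOIN chain ON e.manager_id = chain.id
        )
        SELECT name, depth FROM chain ORDER BY depth
    """).fetchall()
    print(rows)  # [('CEO', 0), ('VP Eng', 1), ('Engineer', 2)]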
> SOA is something that leads to a kind of segmented DDD
> made such a mess with it
Yes, we needed to reinforce it at the infra/deployment level to get any meaningful separation. But as soon as a deadline looms, environments start to blur...
[1] Silverston, Len. The Data Model Resource Book (Revised Edition). Volume 1. Wiley. 2001.
> DDD sounds amazing, but y'all the pull requests are usually a dozen files with tiny changes. At least from what I have seen.
When I've used it, a typical PR is about a Query or a Command that calls a repository to retrieve an entity, maybe performs some logic using that domain entity, and stores it afterwards, across three layers: application, domain and infrastructure (I'd say 2 files minimum, about 6 maximum).
If you have more changes than those 3 layers require, maybe the feature / user story is not granular enough?
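For illustration, the shape of such a change in Python (all names invented): one domain entity, one repository port, one command handler in the application layer:

    from dataclasses import dataclass
    from typing import Protocol

    # domain layer
    @dataclass
    class Invoice:
        invoice_id: str
        paid: bool = False

        def mark_paid(self) -> None:
            if self.paid:
                raise ValueError("already paid")
            self.paid = True

    # infrastructure port (implemented elsewhere, e.g. by a SQL-backed repository)
    class InvoiceRepository(Protocol):
        def get(self, invoice_id: str) -> Invoice: ...
        def save(self, invoice: Invoice) -> None: ...

    # application layer: the command handler is the whole "feature"
    def handle_pay_invoice(repo: InvoiceRepository, invoice_id: str) -> None:
        invoice = repo.get(invoice_id)   # retrieve the entity
        invoice.mark_paid()              # domain logic
        repo.save(invoice)               # persist it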
That sounds reasonable. Maybe the story included some extra things.
Comparatively, it's a regular occurrence to add at least three files for a Cucumber BDD test:
1. The .feature file and one or more scenarios.
2. Adding one or more step defs to one (or more!) step def files.
3. Adding a new page object file or new methods to an existing page object.
4. Possibly, add them to a component file and *then* call that from the page object.
From that perspective, seems the number of files changed is not bad.
I wish I had a representative example on my end. We would be able to see quickly. On the other hand, maybe there was cruft or other extras associated with just starting out with DDD with the existing conventions.
I remember there was another thing that was an issue: updating an aggregate object (?) required manually comparing each field. So the method was having to do that, spanning a page of code.
On balance, DDD affords opportunities to create maintainable code.
Do you have any experiences where the codebase was in a badly-structured form of DDD?
(Given agile development, there are good and bad ways to execute it, for example.)
>Comparatively, it's a regular occurrence to add at least three files for a Cucumber BDD test
This is partly why I so dislike Cucumber.
The gherkin language is insufficiently expressive for most purposes and the syntax for parameterization is just... bad. That's how gherkin stories end up being a glorified label - it's not an intrinsic BDD problem.
It can be difficult to wrangle Gherkin. I've tried to use "Writing Great Specifications" as a kind of guide. Reducing specs to (mostly) one each of Given, When, and Then seems to reduce UI steps.
Having agnostic specs is nice if the team moves from web app to, say, React Native: should still be the same feature set.
The con is QA looking for step-by-step test instructions have to look elsewhere (at the "code"). And, I still haven't seen a real example of QA, PM, and dev working together to write Gherkin :)
Tooling can help for some warts. Gherkin is line-oriented, so a parser is easier to write. Using ex or ed, one can generate a clickable webpage of specs to kick off a test.
Interestingly, the book above recommends DDD as a way to organize test specs. One suggested exercise is to highlight words from a spec and label them as one domain or another, and determine the core domain versus secondary ones.
Of course, one chapter or a few cannot describe DDD, but interesting to see that decomposition.
Oh--just that I couldn't understand the first chapter in Stepanov fully on first reading, so I need to read it again (and again, and again).
The Feynman reference is:

> Well, I asked him, “How can I read it? It’s so hard.” He said, “You start at the beginning and you read as far as you can get, until you are lost. Then you start at the beginning again, and you keep working through until you can understand the whole book.”

> —Joan Feynman, Richard Feynman’s sister, recalling a discussion with her brother
For myself, I’ve learned to avoid dogma, but not let it get in the way of learning. I’ve found that I never stop learning new stuff, and realize that I’ve “been doing it wrong,” more often than I like to admit.
Frequently, I have found that “The New Way!!!” actually just formalizes or names a pattern that I’ve been using for some time.
I’ve come to learn that common sense tends to have obscure provenance.
I'm divided on DDD. On one hand there are some good ideas (system is model, language is important etc) that seems "obvious", but I guess still worth explicit mention.
On the other hand there are implementation techniques that strike me as not really Domain Driven, but I Program In Java With Hibernate Driven (Entities, Value Objects & Aggregates section).
I wonder why those two are married other than historical accident.
I think there is general acceptance that the implementation details in Eric Evans's book could almost be completely skipped. The concepts of domain boundaries and the ubiquitous language are the useful, if seemingly obvious, bits.
See, the thing with DDD is that it is VERY complex. I mean, "the" book on it (Evans) is beefy. Putting all those ideals in practice is hard, especially when so many of the ideas are not necessarily to do with the act of software development - No, they are social concepts, the way a team should approach the business and people outside of the tech teams.
That's why every single implementation of DDD is different. Every company that "implements" DDD does it differently. I've seen this a few times now.
For all the Python devs reading this, a book that aims to cover DDD from a Python perspective is Architecture Patterns With Python [1]. It's free to read online and all the code can be found in Github.
Funny coincidence: just one week ago a colleague of mine and I started "pytest-arch" [1], a pytest plugin to test for architectural constraints. On purpose we kept it very simple. It is already usable and works well, at least for our use cases.
You can use it to check e.g. if your domain model is importing stuff that it should not import.
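To show the idea without relying on pytest-arch's actual API, here is a hand-rolled equivalent using only the standard library (the "myapp" package layout is hypothetical):

    # Fail the build if the domain package imports from the infrastructure package.
    import ast
    import pathlib

    FORBIDDEN_PREFIX = "myapp.infrastructure"   # hypothetical package name

    def test_domain_does_not_import_infrastructure():
        for path in pathlib.Path("myapp/domain").rglob("*.py"):   # hypothetical layout
            tree = ast.parse(path.read_text())
            for node in ast.walk(tree):
                names = []
                if isinstance(node, ast.Import):
                    names = [alias.name for alias in node.names]
                elif isinstance(node, ast.ImportFrom) and node.module:
                    names = [node.module]
                for name in names:
                    assert not name.startswith(FORBIDDEN_PREFIX), f"{path} imports {name}"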
It's very easy: if your software gets complex enough, you've abstracted so much stuff (service layers implementing common actions) that everything is interwoven. You're blocking yourself from iterating. You can't react to changes in requirements or do isolated refactoring. If your logic is encapsulated (and with a sprinkle of hexagonal architecture) you have no problem. If everything is connected, you touch lots and lots of features. Also a problem is feature retirement. With DDD you just delete the class/file/function.
One thing I often hear in the DDD universe: I do not want to know what the underlying database is doing, I just wanna save my stuff. And so it goes that you have slow queries, on both SQL and NoSQL databases, because the programmer does not want to use the DB's own features correctly...
I think about this a lot, and I think my core beef with stuff like this is:
- Engineering team has a problem
- Engineer/Lead/Architect reads a book
- Whole team builds an in-house, bespoke framework around the book's ideas
- They now have 2 problems
I'm very against in-house frameworks; I think they almost never deliver on value, and you should just use Rails/Django/etc (again, you almost certainly won't outdo them).
But if you use a framework that implements DDD (or SOA, or whatever), I'm very OK with them. For instance, I think that Django is basically DDD/CQRS/SOA: views are the service layer, models are entities, querysets are repos, REST is de facto JSON-RPC, etc. And you do start to realize the benefits: engineers aren't bogged down with irrelevant stuff like request (de)serializing, they don't have to make architecture decisions on the fly, you don't have to document/test the framework, etc. etc.
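A hedged sketch of that mapping, assuming a configured Django project (model, field and view names are invented):

    # models.py -- the "entity"; its default manager/queryset plays the repository role
    from django.db import models
    from django.utils import timezone

    class Order(models.Model):
        placed_at = models.DateTimeField(null=True)

        def place(self):
            self.placed_at = timezone.now()

    # views.py -- the "service layer": orchestration only, no domain rules
    from django.http import JsonResponse

    def place_order(request, order_id):
        order = Order.objects.get(pk=order_id)   # repository lookup
        order.place()                            # domain behaviour on the entity
        order.save()
        return JsonResponse({"id": order.pk, "placed_at": order.placed_at.isoformat()})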
Frameworks are not architectures, and architectures are not frameworks. DDD is an approach to building an architecture that emphasizes domain modeling. Done properly, all the web gunk (and there’s a lot of it) can be mostly isolated from the actual interesting part: the domain being modeled.
Until we as a profession understand how to structure code and how it is different from simply choosing Django vs flask, these discussions will not be productive.
I'm not saying they are; I'm saying you should outsource the building of a framework that implements your architecture of choice when you can. I would also go further and say that if you can't, you should pick a different architecture where you can.
I hear you. I just don’t trust frameworks to actually sell you on an architecture that benefits you more than them. Thus you don’t get full benefits of said architecture because you’re using a sanitized version.
Sure, you can do DDD in Django. But I do not find that framework inherently CQRS or SOA (not SOA due to its monolithic nature).
Maybe a monolith on BEAM (Erlang, Elixir) could be marked as SOA due to the nature of BEAM's concurrency model.
> engineers aren't bogged down with irrelevant stuff like request (de)serializing
Last Django app I saw sure had a lot of this boilerplate going. I saw this talk and thought: well, this is finally a "framework" that allows me to do away with serialization (as in: in this framework the amount of extra work you do for making the app an SPA is fairly minimal):
> Sure, you can do DDD in Django. But I do not find that framework inherently CQRS or SOA (not SOA due to its monolithic nature).
Django has "apps", which--while they do run in the same process--aren't intended to use code from each other. They're supposed to have their own models/views/templates/migrations/etc. They're effectively different services, as long as you don't think a service has to be available at a different network address to be a different service (think of mounting different apps or microservices behind different URLs, for example).
Also, I wouldn't say Django is CQRS--CQRS is more or less a different phrasing of "JSON-RPC", which "REST" has become.
> Last Django app I saw sure had a lot of this boilerplate going. I saw this talk and thought: well, this is finally a "framework" that allows me to do away with serialization (as in: in this framework the amount of extra work you do for making the app an SPA is fairly minimal):
Eh, some people are highly allergic to any boilerplate, but like, the DRF example [0] has practically none.
That Elm video is interesting, but I'm skeptical of anything whose central claim is "I can easily turn front end data requests into SQL and back again in very few lines of code". There's just a lot of inherent complexity there, and my evidence is every mainstream ORM, plus all the "backend-as-a-service" products that are multi-1000s LOC. Then again I think SQL is a great (legendary, honestly) language and we should stop trying to replace it.
> I'm skeptical of anything whose central claim is "I can easily turn front end data requests into SQL and back again in very few lines of code".
Same for me. The Elm video is showing off a toy, an interesting toy, but a toy. The presenter also says it's for now only useful for fun projects.
Something with a more mature backend in the same direction would be Hasura (a GraphQL and authorization layer over Postgres) with a client library generated from the GraphQL spec. A generator for Elm exists (which gives you strong type-safety over the API barrier).
It does lead to a situation where SQL is barely used. The GraphQL used to query the db is very SQL-like though :)
I absolutely love Hasura. I basically never want to write a backend API again after using it. But yeah it is a GraphQL to SQL compiler (and yeah all the attributes are like "where", "aggregate", it obviously is SQL-in-GraphQL), which... yeah let's just use SQL everyone, come on.
It's good if those frameworks are very high quality and very thoroughly debugged like django is but if you have to spend an appreciable amount of time peeking under the hood they quickly become a nightmare.
I think about 9 out of 10 home grown frameworks end up in the nightmare bucket.
I'm super skeptical of frameworks that don't really "do" anything also. Django handles web and database and saves you from a whole load of boilerplate crap, but any DDD framework would basically just be an opinionated code mold, a bit like all those dependency inversion frameworks. 100% straitjacket but with none of the boilerplate written for you.
DDD is just microservices without head-of-line blocking; not sure why everyone hates it. Instead of having 5 microservices you just build an umbrella app exposing the same 5 APIs. Calls into the umbrella are now at microsecond speed vs 10s of milliseconds in most (run-of-the-mill publicly available) clouds.
We had an in-office fad of DDD pre-pandemic; it produced some awfully performing designs, mainly due to data copying or translation to the domain objects.
They look fairly pretty and are relatively easy to modify and understand but it's a pain in the ass to make them run fast if the cardinality of the domain objects is really high.
> Be careful with Domain Driven Design in high-performance or data-intensive domains.
This is simply untrue. I’ve used domain driven design in trading systems and trade matching engines which have worked at millions of messages a second without issue. This is actually the sweet spot for DDD, provided you read it as business advice and not technical patterns.
I really liked the reference to platonism at the beginning.
I feel like at times our problems in the West can be rooted in relying too much on Aristotle and empiricism and not enough on platonic thinking (you could swing too far the other way, though).
DDD’s ubiquitous language IS the key as he says. Political discourse is the king of lacking ubiquitous language, though that seems more intentional, so we can use words intentionally meaning different things to one group or another. The result though is an inability to work out problems. Engineering organizations at least are not intentionally using different language, but the result is the same, an inability to solve problems.
The problem with following a list of principles inside a book is that it's hard and requires a holistic view of the entire manuscript, which everyone has to keep reminding themselves of all the time.
This is much more compelling:
Why domain driven design? Because we have tools that detect deviations and give suggestions for expanding your models.
Just like Rust constraining ownership, or Ruby on Rails generating entities, controllers, CSS, HTML and JS for your model with a single command.
In other words, writing rules is easy. You are just leaving tooling (the most useful and hardest part) as an exercise for the reader.
I'm not a DDD advocate but it worries me that the blog author has not referenced the 2003 Eric Evans book "Domain-Driven Design: Tackling Complexity in the Heart of Software".
I think even Eric Evans himself has said that the book you mention brings up concepts in the wrong order. By focusing on small details first, it has confused people into thinking those small details are the important parts. The important bits are the ones that come last in the book.
Well, Evans's and Vernon's books use OOP. And when I was interested in the topic (actually it was event sourcing, which seems to be near 100% married to DDD, if you search online resources), I found only blog posts using OOP.
I found it funny, too, because IMHO event sourcing is fundamentally data driven, so FP would be a more natural fit.
From experience, having written a few of them, I've found it easier in FP languages to get them right and have them stand for a few years without becoming a maintenance headache, than in standard OO. So yes - I figured the same thing.
Scott Wlaschin wrote a good book on this topic “Domain modelling made functional” and you can find his talks and articles online. I think this is a decent introduction, https://techleadjournal.dev/episodes/79/
In my experience the biggest problem with DDD is that many developers are just not interested in the business problem they are designing for. They want a blanket approach that will allow them to quickly get over the design phase and get to "fun" technical problems.
DDD requires you to understand the problem, and to find the pieces that work together (that have to be saved in a single transaction or the world burns). More importantly, to find the pieces that don't have to be saved together, where nothing bad happens (which is harder, because it's so nice that we can set up foreign keys for everything and have always-consistent data. Too bad that it isn't, and doesn't need to be, consistent in the real world).
There are lots of opinions in these comments that I've heard over the years of working with DDD from inexperienced developers.
* DDD equals microservices - completely wrong - you will make mistakes while dividing the domain. You should start from a modular monolith.
* DDD should start from a relational database model - nope. You will lock yourself into unnecessary relations.
* DDD requires big pull requests for trivial changes and that's a problem - first of all, most time is spent reading code, not writing it. 2 minutes more for better readability is worth it, even if you don't like it. Secondly - it's mostly for trivial changes like a new field. Logical changes that are really hard to grasp are usually contained in a few domain classes.
* No team is able to maintain separation between domains - many teams are not able to maintain ANY architecture at all, and we end up with a mess anyway (often because the architecture didn't care about business requirements, and then developers complain that "bad business made them make bad choices"). That doesn't mean we shouldn't try.
In my experience DDD done correctly is only solution that reduces cognitive load while developing big systems. However if you prefer to live by "two weeks of development will save you two hours of planning", then you aren't going to like it :-)
"In essence, all principles help you to model your software in a way that’s highly cohesive and loosely coupled- They are the building blocks of a well designed software."
My suggestion, stay away from all these design patterns. In my experience, they lead to overly complex code because of all the structure. The best structure is the most simple one.
"When you start thinking in abstractions and create abstractions in your code, either top down or bottom up, you will end up producing a good software design."
This, to me, is the most untrue statement you could make about software. Abstractions introduce more complexity, the more complex, the less stable and maintainable your software becomes.
If you want well designed software, you need to keep the code simple, so it's maintainable. Then add some tests to it so you are certain it works correctly. Working with a type-strict language is also recommended. To me, this is the only way (right now) to build stable software.
Keep LOC low, keep file count low, stay away from abstractions and hidden code (unless it's very useful somewhere), keep the amount of types/classes low. TL;DR; you dont want a "code maze. aka. 10 million lines of structured code".
There is truth in both of these statements. But also:
"You can make a dog house out of anything" - Alan Kay
What he's saying is that you can't make a skyscraper out of anything. You need some structure. The trick is to get the amount of structure right. Maybe Ousterhout has the right direction with "deep modules"
What's the difference between Domain Driven Design and Problem Oriented Programming?
I feel like Chuck Moore was talking about this back in the '70s when he wrote Programming a Problem-Oriented Language. Forth has been largely overlooked, probably because it lacks what people consider necessities in modern programming languages (like compiler-enforced typing by default), but you can't deny the power of Forth and other concatenative languages for their ability to encode the domain.
But I never use "domain", "driven", "design", ... as vocabulary. I use "function" all the time: "Hey guys, just use functions; that's all you need to know to make usable software."
What's the problem with NOT following "weird, confusing" patterns and vocabularies?
And the result is that all of my students actually write useful, simple code.
Also, most software recruiters will tell you: "You lack experience in building complicated/complex software!" No, it's that you lack the ability to produce simple software; it's your fault.
Teaching-scale and production-scale programs are very different. Teaching-scale programs, where students start with a blank sheet of paper, are better served by simple approaches. You can't put enough complexity into them to warrant a complex framework.
I would like to see a course with a "maintenance" module, where students have to make changes to an application that's maintained over the several years the course runs, including dealing with mistakes made by previous students.
Breaking up complexity, and ensuring it stays broken up. "This data is read-only; you shall not corrupt our state, deal with it" is an even more important design pattern than a shared domain vocabulary.
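A minimal sketch of that pattern (names invented): hand other modules an immutable copy, so "you shall not corrupt our state" is enforced by the runtime rather than by convention.

```java
import java.util.List;

// Minimal sketch (names invented): the read-only rule is enforced, not assumed.
class PriceList {
    private final List<Integer> pricesInCents;

    PriceList(List<Integer> pricesInCents) {
        // List.copyOf produces an unmodifiable copy; later changes to the
        // caller's list do not leak in, and nothing leaks out mutably.
        this.pricesInCents = List.copyOf(pricesInCents);
    }

    // Callers get a read-only view; any attempt to mutate it throws
    // UnsupportedOperationException.
    public List<Integer> prices() {
        return pricesInCents;
    }
}
```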
You surely don't know whether there exists a practical real-world application that only uses functions (because you don't have access to the codebases of all real-world applications).
That's why your statement is logically wrong. I have nothing more to say here.
I never said that. I said you can't _only_ write functions. If you model properly, you absolutely can have a purely function-based system. But if you skip the modeling and listening step, you will end up with junk 99% of the time.
Hehe, actually, I intentionally said "useful software" instead of perfect, well-designed, scalable, ... software. If you skip that detail, all further logic is in the wrong context.
Domain Driven Design is a poison. The book is one of the most poorly written technical books out there. There are a handful of good ideas buried in its 1,000 pages of unedited, verbose, rambling slog. The whole book should only have been 10 pages.
Besides Eric Evans's inability to write, the poison of DDD comes from locking business/domain concepts into your core technology, making them inflexible and making it difficult for the business to iterate on new ideas. This is a very good article on why you don't need it (and the author's example is healthcare, which has complex business domains): https://dev.to/cheetah100/domain-driven-disaster-147i
Something snapped in Evans's mind around his over-focus on Java and UML that made him try to force every idea into a class hierarchy, and to force that model onto software development as a whole in the form of DDD.
I worked on a number of software projects with Eric and got a lot out of his book before that.
I'm sure you have good reasons for disliking DDD, and I bet Eric would probably agree with most of them. This is what happens when ideas spread broadly: they end up getting applied in ways the originator never intended. (Jung reportedly said, "Thank God I'm not a Jungian.")
Eric's a fine writer and one of the smartest and most interesting people I've met. Nothing "snapped in his mind" and I can tell you for certain that he never tried to "force every idea into a class hierarchy" (quite the contrary! - he's very much a programming pluralist and spent a few years working in Clojure for that very reason). Nor would he ever do something as crude as trying to force a model (any model) onto software development as a whole. He's far too inquisitive and flexible a thinker.
By the way, Eric's way of programming and of thinking about programming was formed in the Smalltalk world, well before Java existed. Like a lot of the Smalltalk diaspora who ended up working in Java (and experiencing it as a kind of exile from the powerful and flexible Smalltalk environments they were used to), he thought deeply about what the differences were and how the Smalltalk design culture could be (partly) recovered and reshaped for these other technologies. A lot of creative work came out of that, not just DDD (e.g. Ward Cunningham's work, which led to Wikipedia).
I read the whole book, and I think my gripes are with Eric's core thesis. Modeling your software around your core business concepts is a poison in most cases. If you hard-code product names, stakeholder names, and specific business processes into your core software, you've locked your business into a bad place. The reality is that business domains shift regularly, especially in new and evolving companies. Instead, software should focus on business-agnostic functional layers. The business domain, as much as possible, should live in configuration and data. Even the core data models you choose should be flexible and business-agnostic, to support future types of business operation. A good test for the poison is how easy it is to add a new product/service line/workflow/stakeholder or onboard a new customer. DDD creates bespoke work and pain around these business needs.
There is probably a minority of cases where you want to lock your business concepts into your core model and service layer, like in a stagnant business that isn't changing, or where there is a universal domain model for that industry. The rest of the time, business domains should be configured at a higher level. The linked article makes a good argument about the flexibility of spreadsheets.
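To make the contrast concrete, a hypothetical sketch (product names invented, not from the linked article): hard-coding a product line into the core versus keeping it in configuration/data.

```java
import java.util.Map;

// Hard-coded: adding a product line means changing and redeploying code.
class HardCodedPricing {
    int priceCentsFor(String product) {
        if (product.equals("basic-plan")) return 1_000;
        if (product.equals("pro-plan")) return 5_000;
        throw new IllegalArgumentException("unknown product: " + product);
    }
}

// Configuration-driven: the same logic reads product definitions from data,
// so a new product line is a new row in a price book, not a new release.
class ConfiguredPricing {
    private final Map<String, Integer> priceBookCents;

    ConfiguredPricing(Map<String, Integer> priceBookCents) {
        this.priceBookCents = priceBookCents;
    }

    int priceCentsFor(String product) {
        Integer price = priceBookCents.get(product);
        if (price == null) {
            throw new IllegalArgumentException("unknown product: " + product);
        }
        return price;
    }
}
```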
The issue with DDD is not the philosophical concept; on that, he is spot on. Things exist in hierarchies, and everything is described by a combination of two or more patterns more fundamental than itself.
The problem is always people thinking it is an easy, magic solution. DDD is not easy. If it were, it wouldn't do anything.
It is in fact very challenging to come up with a ubiquitous language, and to have it evolve as our own understanding evolves.
Organizations looking to use it to make things easier are doomed to fail. What it can help with is making things good and right, and even beautiful, in how a problem is solved.
I'd like to complement that. Defining/discovering the vocabulary for the Ubiquitous Language along with the Bounded Contexts (Strategic Design) is where most of the benefits come from. It is also, as you said and I agree, the most difficult part - there are no recipes, shortcuts, or tools to do it for you.
It doesn't help that there is a bunch of frameworks, libraries, and articles with "DDD" in their name, mindlessly gluing together patterns and segregating them into layers (the Building Blocks). What I see happening most is people using them and then complaining. Which they should.
Of the two parts that compose DDD, Strategic Design and the Building Blocks, only one is essential: Strategic Design. But people usually just talk about the Building Blocks (a.k.a. anemic models), as seems to be the case with stevebmark's comment. I agree with him that just having anemic models, blindly following the Building Blocks part of DDD as a rule, is bad. I just think that calling it DDD is a mistake. DDD is the Strategic Design; you don't even need code for it to work or produce value.
> The book is one of the most poorly written technical books out there.
This explains the disconnect: it’s not a technical book. The part about patterns is garbage, as is anything that espouses object-oriented Java circa 2004 (looking at you, Uncle Bob and Martin Fowler).
As a business book, parts 1 and 3 stand up rather well almost 20 years later. The patterns part (2) wasn’t even good advice when it was written, though.
One of these days perhaps I'll write a meta-pattern on how the software development industry goes through cycles. Every 10 years or so we discard techniques that only gave a 1-2% improvement instead of solving all the problems. We get distracted by a new thing that will fix all that. We can't just add to our toolbox, though. First we have to have the ritual sacrifice of blaming all the failures of the previous 10 years on the old techniques, because they didn't solve all the problems.
Oh wait, it's already been written, by Fred Brooks no less. It's called No Silver Bullet.
There are many glaring issues with the ideas presented in that linked "take down" of DDD.
I can forgive the early strawman comparing "spreadsheets" to bespoke software solutions. As if they cover exactly equal problem/solution spaces. Fine. I'll play along...
I can even forgive failing to understand that DDD is a design methodology, not an architecture; that DDD in no way prescribes or enforces any particular code organization or deployment strategy. Easy to get wrong, I suppose, and it doesn't necessarily invalidate what could become a coherent argument...
But the author then has to go on and give examples of just how little they understand the topic at hand!
The first example points to database software as "an example of good architectural design where their purpose of storing data is not confused with the domain the database will be used for". Oh, the irony! Of course that's the case! The domain of database software is... (drumroll) persisting data! What would you expect to see if you opened the source code for an RDBMS? Code for blasting out marketing emails? I could go a step further and opine that it isn't possible to know whether any piece of software follows DDD without actually seeing the design/code base... but I digress. This point is not forgivable. It's a clear and obvious misunderstanding of DDD and how it is applied to systems.
The next examples they give are of applications they worked on! In both cases the author completely misses the fact that the software they "fixed" by decoupling it from the "domain" is simply an example of following DDD to create a "better" system. That's right! If the goal of your software is to create a generic "data integration application" or "workflow engine", then yes, coupling your design to "healthcare" is a mistake. Both are examples where the author was confused about what domain their software was servicing, and where aligning the software design to the correct domain was a major improvement. Hmm... sounds like DDD to me :)
I think there are valid criticisms of DDD, but the article linked is quite poor at articulating them.
This confirms my suspicion that "domain" has no definition. If "storing data" is a domain, then anything can be a domain, and the word is meaningless.
There are logical groupings of nouns and functionality into services / modules / orthogonal parts of the software. "Aggregate root" is one useful term here; "domain" is not.
And for me it further reinforces the point of the article: design functional horizontal layers, not ones locked into your business "domains".
Come on now... You know perfectly well that "storing data" means something different to database software than to some shitty LoB app. Let's not be so imprecise as to give a disingenuous impression. The article makes almost no argument against DDD, and in fact could be a case study supporting the opposite conclusion! The author confuses architecture with design in every important way (similar to your "horizontal layers" comment).
DDD is not about how software is physically organized nor how it is deployed. You can have a traditional N-tier architecture AND follow DDD. The domain model is a logical model used to abstract the functional requirements of a system. It doesn't "lock" you in any more than whatever else you have in place serving the same purpose; you cannot simply avoid your functional requirements. DDD is a methodology with the specific goal of drawing boundaries (answering what goes where) in such a way as to minimize the cost of change. If your resulting design is not doing so, you have simply failed to model your domain in a useful way.
In the author's case, maybe they really did need a generic "data integration" or "workflow engine" application. That's not an unreasonable assumption. And it follows that coupling those kinds of applications to the data/workflows contained therein would lead to all sorts of problems.
But surely he is not arguing that every application should be designed as a platform? It's considerably more difficult to design and maintain a "workflow engine" than "a single workflow", or a "data ingestion" application than to "just ingest the data". Most of the important bits in the above are already abstracted away from users.