
I don't understand how anemic domain models are supposedly irrelevant now, especially at a time when so many software practitioners seem to follow domain-driven design. In the age of microservices, it is very important to draw clear context boundaries. "Logic lives across multiple services" sounds more like a design smell to me: in that case, even a very simple change in business requirements could mean changes in multiple parts of the system, which in turn could mean redeploying many different services. It also implies more complex integration (and even e2e) testing.



I'm definitely guilty of using and promoting anemic models. I find that teams have a hard time deciding whether logic is service or domain or whatever and gravitate towards putting everything in one place if domain objects are allowed to be at all smart. On the other hand, people do well with the rule that logic belongs in procedural layers with clear names and some sort of maximum size/complexity.

I also find that these discussions matter less in microservices because the size of each service is so small that you usually don't need more than two layers: one for business logic and one for persistence.
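A rough sketch of the shape I mean (all names here are invented for illustration): an anemic entity, a clearly named service layer that owns the rules, and a thin persistence layer behind it.

  // Anemic entity: just data, no behaviour.
  public class Order {
      public String id;
      public long totalCents;
      public boolean paid;
  }

  // Persistence layer: only loads and saves.
  interface OrderRepository {
      Order findById(String id);
      void save(Order order);
  }

  // Business-logic layer: all rules live here, in one clearly named place.
  public class OrderService {
      private final OrderRepository orders;

      public OrderService(OrderRepository orders) {
          this.orders = orders;
      }

      public void markPaid(String orderId, long amountCents) {
          Order order = orders.findById(orderId);
          if (order.paid) {
              throw new IllegalStateException("order already paid");
          }
          if (amountCents < order.totalCents) {
              throw new IllegalArgumentException("payment does not cover the total");
          }
          order.paid = true;
          orders.save(order);
      }
  }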

I very much agree that microservices create their own challenges at service interfaces, but since those boundaries are usually expressed as HTTP operations rather than object-oriented function calls, I look for different solutions, especially various forms of system-wide introspection and testing.


First, anemic models work well in service contexts. You can't send a business rule from a web page to a web service as part of a data entity. And so such models have a valid place in the world.

Microservices are still services. As such, we always validate data crossing a trust boundary, whether in a monolith or a microservice. The rules that do that validation cannot travel with the data.
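For instance (a rough sketch, with invented names): the DTO that crosses the wire is pure data, and the receiving service applies its own validation rules to everything that arrives.

  // What actually crosses the trust boundary: data only.
  public class PaymentRequest {
      public String orderId;
      public long amountCents;
      public String currency;
  }

  // The rules stay on the receiving side and are applied on arrival.
  public class PaymentRequestValidator {
      public void validate(PaymentRequest request) {
          if (request.orderId == null || request.orderId.isBlank()) {
              throw new IllegalArgumentException("orderId is required");
          }
          if (request.amountCents <= 0) {
              throw new IllegalArgumentException("amount must be positive");
          }
          if (!"USD".equals(request.currency) && !"EUR".equals(request.currency)) {
              throw new IllegalArgumentException("unsupported currency");
          }
      }
  }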

So feeling guilty about being anemic strikes me as... odd. Modern connected apps aren't written as single-tier Smalltalk apps.


You wouldn't send a business rule from a web page, you'd put it in the domain model on the server.

The web page has a UI model, not a domain model.
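Roughly what I mean (hypothetical names, just for the sake of the example): the page binds to a flat view model, and only the server-side domain object knows the rule.

  // UI model: what the web page sees and sends back. No rules.
  public class AccountView {
      public String accountId;
      public long balanceCents;
  }

  // Domain model: lives only on the server and owns the business rule.
  public class Account {
      private final String id;
      private long balanceCents;

      public Account(String id, long balanceCents) {
          this.id = id;
          this.balanceCents = balanceCents;
      }

      public void withdraw(long amountCents) {
          if (amountCents > balanceCents) {
              throw new IllegalStateException("insufficient funds");
          }
          balanceCents -= amountCents;
      }

      public AccountView toView() {
          AccountView view = new AccountView();
          view.accountId = id;
          view.balanceCents = balanceCents;
          return view;
      }
  }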


That's the point. And yet Fowler objects: it violates encapsulation and information hiding, it requires a service layer, and it makes the model less expressive.


You can easily serialise a well-fleshed-out (i.e. non-anemic) object. Does the view layer in this particular example need to know the encapsulated business rule? In my experience, not usually. Otherwise, there's most likely a problem with responsibilities. And when that data comes back from the UI, it's also relatively easy to reconstitute it into well-fleshed-out objects.
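Sketching it with Jackson (the class is made up; the point is that only the data goes over the wire and the behaviour is rehydrated on the other side):

  import com.fasterxml.jackson.annotation.JsonCreator;
  import com.fasterxml.jackson.annotation.JsonProperty;
  import com.fasterxml.jackson.databind.ObjectMapper;

  // A non-anemic object: state plus the rule that guards it.
  public class Invoice {
      private final String id;
      private final long totalCents;
      private boolean settled;

      @JsonCreator
      public Invoice(@JsonProperty("id") String id,
                     @JsonProperty("totalCents") long totalCents,
                     @JsonProperty("settled") boolean settled) {
          this.id = id;
          this.totalCents = totalCents;
          this.settled = settled;
      }

      public String getId() { return id; }
      public long getTotalCents() { return totalCents; }
      public boolean isSettled() { return settled; }

      // The encapsulated business rule; it never leaves this class.
      public void settle(long paymentCents) {
          if (settled) throw new IllegalStateException("already settled");
          if (paymentCents < totalCents) throw new IllegalArgumentException("underpayment");
          settled = true;
      }

      public static void main(String[] args) throws Exception {
          ObjectMapper mapper = new ObjectMapper();
          Invoice original = new Invoice("inv-1", 5000, false);

          // Only the data travels...
          String json = mapper.writeValueAsString(original);

          // ...and the receiving side reconstitutes a full object, rule included.
          Invoice reconstituted = mapper.readValue(json, Invoice.class);
          reconstituted.settle(5000);
      }
  }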


This I know. The point isn't how one might do it, or whether it can be done¹. The point is that Fowler has issues with anemic objects, even though there are patterns in distributed computing best solved by anemic objects.

¹ Note that reconstitution does not do away with anemic objects: it does not implement logic in a purely object-oriented way, i.e. the logic does not travel, but the data does.



