My general take (and w/ the caveat that every system is different) is as follows:
- procedural code to enter into the system (and perhaps that's all you need)
- object oriented code for domain modeling
- functional code for data structure transformations & some light custom control flow implementation (but not too much)
I like the imperative shell, functional core pattern quite a bit, and focusing on data structures is great advice as well. The anti-OO trend in the industry has been richly earned by the OO architecture astronauts[1], but the idea of gathering a data structure and the operations on that data structure in a single place, with data hiding, is a good one, particularly for domain modeling.
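If it helps, here's a minimal sketch of what I mean by functional core, imperative shell (TypeScript purely for illustration; all the names are made up):

```typescript
// Functional core: pure functions over plain data structures.
type LineItem = { sku: string; quantity: number; unitPriceCents: number };

function orderTotalCents(items: LineItem[]): number {
  return items.reduce((sum, i) => sum + i.quantity * i.unitPriceCents, 0);
}

// Imperative shell: I/O and side effects live out here and call into the core.
async function handleCheckout(
  readCart: () => Promise<LineItem[]>,
  charge: (cents: number) => Promise<void>,
): Promise<void> {
  const items = await readCart();        // effect: read
  const total = orderTotalCents(items);  // pure: decide
  await charge(total);                   // effect: act
}
```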
In general I think we are maturing as an industry, recognizing that various approaches have their strengths and weaknesses and a good software engineer can mix and match them when building a successful software project.
There is no silver bullet. If only someone had told us that years ago!
OO code for domain modeling might be, to date, the single greatest source of disillusionment in my career.
There are absolutely use cases where it works very well. GUI toolkits come to mind. But for general line-of-business domain modeling, I keep noticing two big mismatches between the OO paradigm and the problem at hand. First and foremost, allowing subtyping into your business domain model is a trap. The problem is that your business rules are subject to change, you likely have limited or even no control over how they change, and the people who do get to make those decisions don't know and don't care about the Liskov Substitution Principle. In short, using one of the headline features of OOP for business domain modeling exposes you to outsize risk of being forced to start doing it wrong, regardless of your intentions or skill level. (Incidentally, this phenomenon is just a specific example of premature abstraction being the root of all evil.)
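To make that first trap concrete, a contrived sketch (TypeScript, hypothetical names): the hierarchy looks fine on day one, right up until the business changes the rules underneath it.

```typescript
// Day 1: "a discounted order is a kind of order" seems harmless.
class Order {
  constructor(public subtotal: number) {}
  total(): number { return this.subtotal; }
}

class DiscountedOrder extends Order {
  constructor(subtotal: number, private rate: number) { super(subtotal); }
  total(): number { return this.subtotal * (1 - this.rate); }
}

// Day 400: "discounts no longer apply to orders under $50, except for loyalty
// members, unless there's a promo code..." The new rules don't respect the
// subtype boundary, and every caller that accepted an Order and assumed
// total() === subtotal is now quietly wrong.
```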
And then, second, dynamic dispatch makes it harder for newcomers to figure out the business logic by reading the code. It creates a bit of a catch-22 situation where figuring out which methods will run when - an essential part of understanding how the code behaves - almost requires already knowing how the code works. Not actually, of course, but reading unfamiliar code that uses dynamic dispatch is an advanced skill, and nobody enjoys it. Admittedly, this problem can be mitigated with documentation, but that solution is unsatisfying. Just using procedural code and banging out whatever boilerplate you need to get things working with static dispatch creates less additional work than writing and maintaining satisfying documentation for an object-oriented codebase, and comes with the added advantage that it cannot fall out of sync with what the code actually does.
Incidentally, Donald Knuth made a similar observation in his interview in the book Coders at Work. He expressed dissatisfaction with OOP on the grounds that, for the purposes of maintainability, he found code reuse to be less valuable than modifiability and readability.
This is a strong argument against inheritance, but inheritance isn't all of OOP. It's just one well-supported, advanced abstraction (and one that I would argue should be used rarely).
I would argue that just having a strong type system and bundling methods with data gets you the vast majority of the usefulness of OOP. Liskov, Open/Closed, Message Passing, and other theoretical abstractions be damned.
EDIT - Where are the good places to use inheritance?
There are only a few I can think of.
One is when you are trying to create a system that inverts dependencies by allowing a plugin system, or that follows some sort of nuanced workflow that others might want to "hook into". But that isn't the only way to do it; other approaches, like passing in functors, might be better (see the sketch below).
Another situation I have seen recently is when creating a kind of data or messages that differ only by type and maybe a few small pieces of behavior and they are all known up front.
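For the first case, a rough sketch of the two shapes (TypeScript, invented names): the same inversion of control works as a subclass hook or as a plain functor you pass in.

```typescript
// Inheritance-style hook point: subclasses fill in the step.
abstract class ImportStep {
  run(rows: string[]): string[] { return rows.map(r => this.transform(r)); }
  protected abstract transform(row: string): string;
}
class TrimStep extends ImportStep {
  protected transform(row: string): string { return row.trim(); }
}

// Same inversion of control by passing a function instead of subclassing.
function runImportStep(rows: string[], transform: (row: string) => string): string[] {
  return rows.map(transform);
}
const trimmed = runImportStep(["  a ", " b"], r => r.trim());
```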
> I would argue that just having strong type system and bundling methods with data gets you the vast majority of the usefulness of OOP.
Yes, a module system brings almost all of the advantages of OOP. The one remaining is structure abstraction (things like interfaces in Java-derived languages, or type classes in Haskell-derived ones).
But well, none of those are even typically associated with OOP. The OOP languages just have those features, like they have variables too.
Yep. Rust has all of these features (modules, structs with associated methods and type classes (traits)). But nobody thinks of it as an OO language. In fact, I’ve heard that many people struggle with rust if they’ve come from a heavily OO language like Java. You have to structure your code a little differently if you don’t have classes.
Modula apparently had many of these features too - and that predated what we now think of as object oriented programming. The good parts of OOP aren’t OOP.
> One is when you are trying to create a system that inverts dependencies by allowing a plugin system or follows some sort of nuanced workflow that others might want to "hook into".
I’m fairly certain that’s the use case of inheritance - at least in the Simula tradition. Classes as a means of lifetime management, moving parts that have well defined steps of operation (methods), and interchangeable parts (subtypes) which you can more or less slot into the larger system (polymorphism).
It’s easier to think about classes not as nouns, but as verbs over time (or rather, bounded by time): at a specific moment in the assembly line, call this particular method, at another moment, call that other method…
I would even go so far as to say that object-oriented programming in the Simula tradition is just best practices in structured/procedural programming taken to their logical extreme.
Wrt plugin systems: at least at the class level, are classes really a means of lifetime management in practice?
IIRC, audio plugin APIs follow the shell command pattern of memory management for loading new classes-- the user dynamically loads a library into a running instance of an application, and there it stays until the application exits.
And even if plugin systems as implemented are actually unloading classes, the user is almost always just restarting the app to make sure it took. :)
Agree 100%: static typing (for code completion) + method/data bundling is the major win in OO, and it rarely gets talked about for whatever reason.
It's unfortunate that inheritance became such a major focus of practical OO languages. Would love to see a composition-first OO language. Might have its own problems, but would at least be interesting.
Go, Rust, Zig, etc all support static typing and method/data bundling without any explicit language support for implementation inheritance (interface inheritance in general and especially when structural rather than nominal is not nearly as much of an issue and doesn't create strict tree hierarchies).
Rust has support for variance and subtyping so perhaps it's not as pure of an example, but it's pretty heavily restricted.
Zig's support for method/data bundling being used for "objects" isn't even first class so I wouldn't call it OO (object-oriented) so much as object-orientation-capable with less fuss than if one wanted to build their own objects system in C.
Even in C++ the last time I thought I might need inheritance I made a simple class/struct with a few members that were `std::function` instances. Instead of needing inheritance this worked and I managed to keep type safety checks on all function return and parameter types. Once upon a time this would have been weird function pointers and `void*` with dangerous casts. Last month when I did it, there were just lambdas passed to typesafe constructors.
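The same shape works outside C++ too; roughly, in TypeScript (illustrative, not my actual code), the "members that are `std::function`" idea is just a record of typed callbacks:

```typescript
// Composition via typed callbacks instead of a virtual-method hierarchy.
type Codec<T> = {
  encode: (value: T) => Uint8Array;
  decode: (bytes: Uint8Array) => T;
};

const jsonCodec: Codec<unknown> = {
  encode: v => new TextEncoder().encode(JSON.stringify(v)),
  decode: b => JSON.parse(new TextDecoder().decode(b)),
};
// Callers depend only on the Codec shape: no base class, full type checking on
// parameter and return types, and any lambda with the right signature fits.
```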
Go's first-class support for typed return tuples and interfaces is a lovely replacement for inheritance (e.g. an interface of type blah supports this signature). They function as an API contract: if a given type implements the requirements of the interface, it can be used as that interface anywhere that accepts it.
It's not unfortunate happenstance, it's by definition.
Dynamic dispatch is the defining feature of object-oriented programming. In dynamically typed languages such as Smalltalk, you can get there with duck typing. But a statically typed language needs a statically typed mechanism for dynamic dispatch, and that requires some way of saying, "Y is a particular kind of X, so all members of X are also in Y." Which is - again by definition - inheritance.
You could remove - or refuse to use - the inheritance (or, equivalently for some purposes, duck typing). But that would also prevent the use of dynamic dispatch, so what you're doing would be procedural programming, not OOP, even if you're using an object-oriented language to do it.
> Dynamic dispatch is the defining feature of object-oriented programming.
Message passing is the defining feature of object-oriented programming. Dynamic dispatch can be achieved using message passing, but message passing is more than dynamic dispatch.
Ultimately, static typing is incongruent with object-oriented programming. Messages are able to be invented at runtime, so it is impossible to apply types statically. At best you can have an Objective-C-like situation where you can statically type the non-OO parts of the language, while still allowing the messages to evade the type system.
Whether you'd call it "composition-first" is probably asking for a big argument about what "composition first" really means, but Go is certainly a language that syntactically privileges a particular type of composition over inheritance. It doesn't even have syntax for inheritance, and frankly even manually implementing it is rather a pain (best I've ever done requires you to pass the "object" as a separate parameter to every method call... and, yes, I said that correctly, to every method call).
I'm not ready to try to stake a position on the top of some "composition first" hill because the syntactic composition it supports is not something I use all the time. It's an occasional convenience more than a fundamental primitive in the language, the way inheritance is in inheritance-based languages. Most of the composition is just done through methods that happen to use composed-in values, but it is generally not particularly supported by syntax.
Inheritance is just plain a great way to model a lot of relationships, in my experience, because a lot of things are most easily thought of as "x is a kind of y". I am perennially baffled that people shit on inheritance so much, because I think it's incredibly useful. I find myself often missing inheritance when working in Rust, for example.
Implementation inheritance often leads to code that is just awful to read. If class C extends class B and class B extends class A, then to find out what `new C().foo()` actually does, you need to read through the whole C-B-A hierarchy, bottom to top. If `A.foo()` calls `this.bar()`, you have to start again, from the bottom of the hierarchy. With an inheritance hierarchy of depth n, every method call could be going any of n different places. With an interface, there's a single level of indirection. With composition, the code simply tells you what happens next.
If class A and class B both implement interface X, and B wants to borrow code from A, it should just call A's methods—ideally, static methods, but B can keep an instance of A if it wants. Explicit is better than implicit.
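Something like this, roughly (TypeScript, invented names): B and A satisfy the same interface, and B borrows A's code by calling it explicitly, so the reader sees exactly where control goes.

```typescript
interface Formatter {
  format(value: number): string;
}

class PlainFormatter implements Formatter {
  format(value: number): string { return value.toFixed(2); }
}

// Composition + delegation: no hidden super-call chain to trace.
class CurrencyFormatter implements Formatter {
  private plain = new PlainFormatter();
  format(value: number): string { return "$" + this.plain.format(value); }
}
```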
Also, I dislike ontological statements like "x is a kind of y." What does that mean? Typically, it's a claim about behaviour: "x offers method w and satisfies invariant v". But the actual blueprint here is an interface, (w,v)—not another object y. The waters get even muddier when we start talking about "is-a" vs "has-a" relationships. It feels like OOP is trying to unhelpfully distance us from what's actually going on with our code. Under the hood, inheritance is no more than syntactic sugar for composition. I think that OOP's focus on the ontological philosophy of inheritance is the reason why it led to so much bad AbstractObserverStrategyFactory-style code.
> dynamic dispatch makes it harder for newcomers to figure out the business logic by reading the code
This is definitely a potential problem, but I note that you can also get into this mess without OO in any language that lets you put a (reference to) function in a variable. Or, god help you, operator overloading.
If I overload shift-left `<<` for a completely different concept such as "piping", that's my mistake. That's like writing a normal function or method and calling it `foo()` when it has nothing to do with the concept of fooing.
That said, unless you are writing a math library or some container, there's not many good uses for operator overloading.
I think the main difference between the two is that, as someone reading and debugging the code, I will probably eventually check even those methods that I assume I know roughly what they do. In contrast, I may not even think to check an overloaded operator unless I _already know_ that it's overloaded.
Maybe a good analogous method would be an overloaded `.ToString()` in C# that has side effects or returns the full text of the Magna Carta or something.
Custom operators of any kind are definitely a problem for learners. I think people just fixate on overloading because that's the only kind of operator customization available in the most popular languages.
The particular problem is that search engines tend to have terrible support for searching for arbitrary sequences of non-alphabetic characters.
ConsoleLogger is a Logger because they share a method (log).
PaidUser and User can have some common things, but I don't think the commonality is only in the way they behave; it's also in the way you contact/use them.
But, in a way, OO modeling and design was invented to solve the mess that "banging out" procedural code created in the first place.
You have to model your business domain in software one way or another anyway. Why should it be bad to try to be more methodical about it using OO methods? We do it with relational databases all the time where tables are pretty similar to objects.
I actually have nothing against objects and methods, but that’s a very limited subset of OO. I prefer to use algebraic data types for domain modeling, and giving them methods is totally fine too. But I do prefer them to be immutable in most cases, which is also quite counterintuitive from an OO perspective.
The oscillation between the two as to which is in favour is also humorous.
Pursuing the right thing for the right need, for the present and near future, is often the way to go, especially the newer the codebase and the greater the need to learn.
Software exists in the real world, and is used to solve real world problems. In building software we inevitably invent or use abstractions to represent or effect real world things. Abstractions that make it easier to do this are good, abstractions that make it harder to do this are just getting in the way.
> In building software we inevitably invent or use abstractions to represent or effect real world things.
Ehhh. Most abstractions I’ve written aren’t abstractions over the real world. They’re abstractions over the low-level machinery of a computer or program. (E.g. there’s no HTTP request and response, DB connection or network socket outside of computer software.)
The real world isn’t object oriented. It’s just a bunch of atoms moving around. You can describe physical reality as a bunch of objects that interact via ownership and method calls, but there’s nothing natural about that. OO is no better of a way to describe the real world than actors & message passing, or state & events.
Software that models “the real world” usually describes users, money, addresses and things like that. But none of those things are made out of atoms. There is no money molecule. Money is not on the periodic table. They’re all just another form of abstraction - one that happens to exist outside the software world, that we can capture in part in a database table.
Interesting abstractions are all invented ideas. Some are useful. Some are elegant to express and use in OO code, and some are not.
1 - (Application) Systems exist in the real world, not software. Software exists in machines.
2 - Computing is used to solve real world problems.
3 - "In building software we inevitably invent or use abstractions to represent or effect real world things." Here is the problem where we part company.
4 - Abstractions that inform computing systems are indeed useful.
[edit+ps]
self disclosure: I've reached 'architectural orbit' numerous times in my career. 30 years later, I am sharing a subtle point. Effective software models cut out attributes of real-world elements of the problem domain. All attempts to "model the world" end in tears.
For me, software and tech that is made for someone exists to work for people, the end users.
End users and customers don't exist to serve at the leisure and pleasure of software and its creators.
Making people work harder than they need to in order to operate software is selfish.
DevOps and DevEx are important, but if no one uses the product even when those are great, the customer and their experience are often lost and never gained.
Learning to model something flexible enough to absorb and quickly implement the relevant early customer feedback is critical to boring things like retention.
Helping customers earn enough to eat every month, helps the tool makers earn enough to eat every month.
If we're talking about IT (information processing in general), then the domain model is just data representing facts and should probably be treated as that, and not some metaphorical simulation of the world.
I've come up with a pretty useful test for when to apply OO:
When you need to model a _computational unit_[0] in terms of _operational semantics_, then use OO.
[0] Decidedly _not_ a simulation of a metaphor for the "real world".
---
Examples:
A resizable buffer: You want operations like adding, removing, preemptively resizing, etc. on a buffer. When you use it, it's useless to think about the internal bookkeeping represented in its data structure.
A database object: It wraps a driver, a connection pool etc. From the outside you want to configure it at the start, then you want to interact with it via operations.
An HTTP server: You send messages to it via HTTP methods, you don't care about its internal state, but only about your current representation of it (HATEOAS) and what you can do with it.
A memory allocator: The name gives away that you can _do_ things with it. You first choose the allocator that fits your needs, but then you _operate_ on it via alloc/free etc.
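To make the first of those concrete, a tiny sketch (TypeScript, purely illustrative): the operations are the whole interface, and the bookkeeping stays hidden.

```typescript
// A resizable buffer as a computational unit: you push/pop/reserve and never
// think about the backing array or the grow policy.
class ByteBuffer {
  private data = new Uint8Array(16);
  private length = 0;

  push(byte: number): void {
    if (this.length === this.data.length) this.grow();
    this.data[this.length++] = byte;
  }
  pop(): number | undefined {
    return this.length > 0 ? this.data[--this.length] : undefined;
  }
  reserve(capacity: number): void {
    while (this.data.length < capacity) this.grow();
  }
  private grow(): void {
    const bigger = new Uint8Array(this.data.length * 2);
    bigger.set(this.data);
    this.data = bigger;
  }
}
```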
---
Some of us wince when we hear "OO", because it has been an overused paradigm. Some advocates of OO have been telling us that it is somehow total (similar to FP advocates) and people have been pushing back on this for a while now.
When applied to information processing especially, it becomes ridiculous, complex and distracting. I call this "Kindergarten OO": you write code as if you were explaining the problem to a child via metaphors.
Computational objects however arise naturally and are very obvious. I don't care if those are encoded as classes, with closures or if we syntactically pretend as if they aren't objects. They are still objects.
C#'s syntax for pattern matching has improved quite substantially since 2016, and it's very easy to model ADTs with records (and even before that, the experience was decent with methods accepting lambdas).
Today, you write it in a similar way you would write a match in Rust.
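For anyone who hasn't seen it, the shape of ADT + match is roughly this (sketched in TypeScript with a discriminated union rather than C# records, just to show the idea):

```typescript
// A small ADT: a payment is exactly one of these cases.
type Payment =
  | { kind: "card"; last4: string }
  | { kind: "cash" }
  | { kind: "voucher"; code: string };

function describe(p: Payment): string {
  switch (p.kind) { // exhaustive "match" over the cases
    case "card":    return `card ending ${p.last4}`;
    case "cash":    return "cash";
    case "voucher": return `voucher ${p.code}`;
  }
}
```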
The other commenter is not technically wrong to point at “algebraic data types”, but I don’t think that answer is helpful at all. It’s like saying the answer to data modelling is tuples.
I would instead recommend searching for “functional programming and domain driven design”.
> but the idea of gathering a data structure and the operations on that data structure in a single place, with data hiding, is a good one, particularly for domain modeling.
One can do this in a module without OOP.
The idea of mixing data and behaviour/state (OOP), instead of keeping data structures separate from the functions that transform them (functional), is IMO the biggest mistake of OOP, together with using inheritance.
I believe making part of the program data instead of code (and thus free of bugs) is such a big advantage; Lisp was already talking about this. Mixing data with behaviour, without a clear delimitation, creates a tightly coupled implementation full of implicit assumptions. Outside the class things look clean, but inside they ossify and grow in complexity. Pure functions with data in, data out are such a big improvement in clarity when possible.
I hadn't thought about it explicitly like this before, and I think I agree. My more nebulous thought process was something like:
1. Try to solve the problem purely functionally.
2. If that failed because of a data issue, model the data with objects and simple operations in an OOP style, or with a well-thought-out collection of arrays (the game devs' answer to OOP causing memory and caching problems, though the end result is similar to the OOP thought process)
3. If that can't happen because some external restriction is imposed, use the minimal amount of procedural logic to solve the problem and round off as many sharp corners as is practical, until it is unlikely anyone on the team gets cut.
Logically that is very close to an inversion of the thought process and ordering of operations you suggested. But I think we would recognize each other's attempts to pick a design paradigm in code.
Now I want to think about this more. Is there some underlying principle here? Where do domain-specific languages fit in? Do other paradigms fit in? What are the bounds of this pattern, and where does this process fail?
Look, I'm going to catch flak for this but at the end of the day the main problem is that Java and C++, the most popular OOP languages, are just bad programming languages.
There are OOP languages out there, most of them older than Java and C++, that actually provide a much better set of knobs and handles for writing sane OO programs.
Java is finally getting a bit better thanks to a lot of market pressure and good ideas from Kotlin. C++ will probably be a mess forever.
That's the strategy I always take when designing a system. Funny that I never thought about it before; I think most PHP developers will relate to that as well.
- Procedural single point entrance into the system (network -> public/index.php, cli -> bin/console)
- OOP core for business logic, heavily borrowed (copied) from Java OOP model
> These things might be good architectures, they will certainly benefit the developers that use them, but they are not, I repeat, not, a good substitute for the messiah riding his white ass into Jerusalem, or world peace. No, Microsoft, computers are not suddenly going to start reading our minds and doing what we want automatically just because everyone in the world has to have a Passport account.
When you write a procedure that has to maintain an internal state between calls, changing it into a class makes sense. As for the name, you change the verb (write) into a noun (writer), and you now have a name for the class.
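Concretely, something like this (TypeScript, hypothetical names): the moment the procedure has to remember something between calls, the verb wants to become a noun.

```typescript
// Stateless procedure: a plain function is fine.
function write(sink: (s: string) => void, chunk: string): void {
  sink(chunk);
}

// Once it has to remember things between calls (bytes written, indentation, ...),
// turn the verb into a noun and give that state a home.
class Writer {
  private bytesWritten = 0;
  constructor(private sink: (s: string) => void) {}
  write(chunk: string): void {
    this.sink(chunk);
    this.bytesWritten += chunk.length;
  }
  totalBytes(): number { return this.bytesWritten; }
}

const w = new Writer(s => console.log(s));
w.write("hello");
```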
C# will silently create hidden closure classes for you when you use lambdas or yield.
Just know that if you do this, you’re injecting statefulness into the center of wherever this procedure was being used. If your entire system already has statefulness everywhere, nobody will bat an eye. But if you want any chance at creating a functional core or island, it’s the opposite of what you should be doing.
When you write a procedure that has to maintain an internal state between calls, stopping what you're doing and switching to functional programming makes sense.
James Gosling, who I'd consider the father of one of the most popular OO languages, gave this advice:
"You should avoid implementation inheritance whenever possible"
My early days of Java were largely spent building unmaintainable inheritance trees into my code and then regretting it. This quote gave me comfort that it wasn't really that good an idea.
Although I agree with the recommendations, I cringe at the definition of abstraction. In a sane world, abstraction doesn't mean defining classes so much as it means identifying important unifying concepts. DRYing your code by moving a method to a common base class isn't abstraction in any important way, it's just adding a level of indirection. In fact, I'd argue that this example is the opposite of abstraction: it's concretion. Now every subclass relies implicitly on a particular implementation of that shared method. Not that doing this is never useful, but it's a mistake to call it abstraction when it's nothing of the sort. No wonder people complain that their abstractions leak.
I'm currently dealing with a codebase that does this to a ridiculous extent. Like, literally, every change affects the entire project because everything is made of base-classes mixed in weird ways. Every concrete object inherits multiple base classes and no individual behavior. Imagine something like this:
class Book extends ShelfableItem, Pagable, Authored, Readable, BaseBook {}
Even if they were interfaces, it screams of the "model your code after physical objects" approach, where a system has 1 enormous "Book" type which represents all the things you can do with a physical book.
It seems unlikely that the same type should be "Shelfable" and "Readable" / "Pagable," because they describe distinct sets of operations. When a book is on a shelf, you can't page through it. If you "read" a book on a shelf, you only see the title, author, and maybe some pull quotes.
It depends. Of course we don’t see the whole picture because it’s just an example by OP, but I also find it weird that the de facto solution to abstract classes is: interfaces. Sometimes, duplication is better than interfaces.
Yes. Did I mention there are interfaces too, with almost the same names, but they’re only used for the base classes? (Yes, there is only one implementation of all the interfaces.)
Having lots of interfaces for common things is not a bad thing. See how Rust traits work... even basic structs you create will probably implement lots of basic traits (some of which can be done automatically, thankfully) like `Display`, `Default`, several `From` or `Into` impls, `Clone`, `Copy` if your type is "light", `AsRef`, `Send` and many more!
This makes code much more reusable as so many functions are written based on those basic traits alone.
Of course, finding the right basic types is really hard and your company seems to have done that badly, but in principle, having some basic types to model very common "things" is a necessary thing.
The issue isn't the interfaces; it's that there is only one implementation (the base classes) per interface—so why even bother having an interface? Otherwise, I agree with you to a degree.
The main issue with the codebase is that if you want to, say, change the behavior of a Book, you have to go change the behavior of some base class (after working out which one is actually being called). This base class might be used in a Clock, Field, or Filesystem as well—something so conceivably far away that their similar behavior is a coincidence and not really related at all. Then you get to argue with the architect about whether "reading a book" is different than "reading a clock."
If by abstraction you mean identifying unifying concepts, then I can't understand how you reasoned yourself into thinking that identifying a common method and sharing it between multiple classes by means of a superclass is not abstraction. You have identified a commonality - the common code, the common method. By your definition it's abstraction.
It's not abstraction because it's not presenting a simpler mental model. It's just shoving some code located somewhere else into scope. It's mere indirection.
If I said 'User', we both know exactly what that means. It's so semantically simple that laypeople know what it means. But our implementations could vary wildly. Someone who's just taken Java 101 will be thinking of a class with getName() and setName(). But someone who's just taken SQL 101 will think of a User as an INT or UUID, where features are added by referencing that user's id from different tables. User is abstract because it's understandable and not locked into any particular implementation.
I love Kafka. But it's a PITA to program against, at least in Java. I cannot code directly against it and always need to make my own wrapper classes to construct and poll it. I'll make ResumingKafkaReader and RewindingKafkaConsumerFactory, etc. These are not abstract, because they are very specific about what and how they do things. They are concrete behaviours wrapped with 1-2 levels of concrete indirection.
However, I might inject one of my Kafka indirections into a business logic class, interfaced as a Supplier<User>, which makes it abstract. I can then unit test my class, safe in the knowledge that my class cannot know if a User came from Kafka or just a test stub.
So I push back on the thesis of the article, and double-down on doing things abstractly first and foremost. This is closely related to the dependency inversion principle. Write (and test) your business classes around Users and other abstract things. Once you've done it wrong a few times and eventually gotten it right, then you can start writing the indirections (e.g. AbstractKafkaFactory) which the article rightly claims slow you down in the beginning.
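In rough TypeScript terms (the Java version is a Supplier<User>; these names are invented), the shape is just:

```typescript
type User = { id: string; name: string };

// Business logic depends on an abstract "source of users", not on Kafka.
class WelcomeEmailer {
  constructor(private nextUser: () => User) {}
  sendNext(): string {
    const user = this.nextUser();
    return `Welcome, ${user.name}!`;
  }
}

// Production wires in the concrete Kafka-backed indirection; a test wires in a
// stub, and the class cannot tell the difference.
const testEmailer = new WelcomeEmailer(() => ({ id: "1", name: "Ada" }));
console.log(testEmailer.sendNext());
```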
I came to say more-or-less the same thing. The author is making some valid points, but the moral is that premature reification of abstract concepts may be harmful, especially if there is something vague about them.
I had to mull over this for a while, but I think I agree - abstractions at a conceptual level are much more powerful than object-level "compression".
Concepts/domain model/whatever tend to change over time though (at least in the business world, maybe not so much tooling etc). I think that's another source of leaky abstractions - things that conceptually made sense together at one point grow apart, and now you're left with common code that is deeply integrated but doesn't quite fit any more.
Gall’s law: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”
Your theory of premature architecture reinforces Gall’s law.
This is from the book Systemantics: How systems work and especially how they fail (1977).
Anyone who wants to do a deep dive into understanding effective abstractions, I highly recommend SICP. The full book[0] and lectures[1] are available online for free. You don't have to know Scheme to follow along.
I like to start with a fairly unambitious bit of procedural code and gradually introduce abstractions when it starts to get complicated or repetitious.
Straight code becomes functions; occasionally a group of functions cries out to become a class.
In C++ this is a huge effort to do - change hurts more there. In python it's much less painful and I end up with a program that is some imperfect composite of object oriented with functions. Next week when I want it to do more I move it further down the road to structure.
I also like keeping side effects in their own ghettos and extracting everything else out of those if possible but I'm not a big functional programming person - it's just about testing. Testing things with side effects is a pain.
I find that JS/TS also lends itself towards this in terms of Node/Deno/Bun usage for apps. You can have a file/module that simply exports a function, a collection of functions, a class, etc. It's easy to keep it simple and then combine with a mix of procedural, functional and oo concepts as best fits the use case.
Yes, I'm doing that a lot, too. I'm often astounded by how hostile some languages are to later changes - e.g. java always feels resistant to change, while dotnet and especially python are more amenable. For example, I totally transformed a program from functional to OO in python without much sweat - it would have been a total pain in dotnet or java.
Difficult on mobile and I'm without access to a PC ATM.
Basically, my predecessor had built a python program which had N modules which were applied to a data structure - think of them as correction steps: first spell checking, then turning manual headlines into actual headlines, etc.
Originally, they operated on a global data structure, and every new module required calling it on the global structure, so extension was difficult. Worse, every module had internal state, e.g. the number of spelling mistakes, so reporting these things at the end was cumbersome. There were "required" functions in each module, but it was difficult for newcomers to discover what they were.
We changed it so that every step was a class which inherited from a base class, so adding a new step was as easy as inheriting from the base class. Furthermore, we added auto-discovery, so just adding the class to a module was sufficient to get it executed.
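Not the actual code, but the shape was roughly this (sketched here in TypeScript with an explicit registry standing in for Python's module auto-discovery):

```typescript
// Each correction step subclasses Step; registering it is all that's needed for
// it to run, and its own state (e.g. mistake counts) lives on the instance.
abstract class Step {
  abstract apply(doc: string[]): string[];
  report(): string { return ""; }
}

const registry: Step[] = [];
function registerStep(step: Step): void { registry.push(step); }

class SpellCheckStep extends Step {
  private mistakes = 0;
  apply(doc: string[]): string[] {
    // ...fix spelling and increment this.mistakes as needed...
    return doc;
  }
  report(): string { return `${this.mistakes} spelling mistakes`; }
}
registerStep(new SpellCheckStep());

function runPipeline(doc: string[]): string[] {
  return registry.reduce((d, step) => step.apply(d), doc);
}
```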
Wonderful write-up. One way I really try to avoid premature abstractions is co-locating code wherever it is used. If you don't try to put everything in a shared lib or utils, you keep the surface area small. Putting a function in the same file (or same folder) makes it clear that it is meant to be used in that folder only. You have to be diligent about your imports though and be sure you don't have some crazy relative paths. Then, if you find yourself with that same function in lots of places in your code, you might have stumbled upon an abstraction. That's when you put it in the shared lib or utils folder. But maybe not, maybe that abstraction should stay in a nested folder because it is specifically used for a subset of problems. Again, that's to avoid over-abstracting. If you are only using it for 3 use cases that are all within the same parent folder path (just different sub-folders), then only co-locate it as far up in the file tree as is absolutely necessary for keeping the import simple. Again, it requires due diligence, but the compartmentalization of the folder structure feels elegant in its simplicity.
I'm not "formally" trained in software engineering and am primarily self-taught. This area, in particular, has been confusing to me over the years, especially after consuming so many contradictory blog posts.
I tried to model DDD in a recent Golang project and am mostly unhappy with the result. Perhaps in my eagerness, I fell into the trap of premature abstraction, but there's not anything in particular that I can point to that might lead to that conclusion. It just feels like the overall code base is more challenging to read and has a ton of indirection. The feeling is made worse when I consider how much extra time I spent trying to be authentic in implementing it.
Now, I'm starting a new project, and I'm left in this uncertain state, not knowing what to do. Is there a fine balance? How is it achieved? I appreciate the author's attempt at bringing clarity, but I honestly walked away feeling even more confident that I don't understand how this all works out.
The one point of DDD is that you make a dictionary of all of the domain terms and get everybody to accept them.
You can cut those 4 pages and throw the rest of the book away. But it is one of the best books on software engineering, and only gets better once you do that.
> I'm not "formally" trained in software engineering and am primarily self-taught
Welcome to the club and I wouldn't be too worried about it (but definitely read and learn what others have figured out).
Software design and development is still an unsolved problem. The industry has not collectively found a foundational set of standard practices that apply across the board other than some of the most basic (e.g. organization is good).
You can tell that it's not solved by the relentless flow of industry trends that become the new "best practice" until some years later when we figure out "well, that approach has these pros and these cons and tends to fit with these types of problems, but definitely not a silver bullet, let's try the next thing"
Regarding your specific issue on your new project: just be pragmatic, get it working and learn from your decisions, it's all just a collection of pros and cons and the analysis of pro vs con changes depending on the angle you look at it (e.g. short term vs long term, slow changing environment vs fast changing environment, cost to value ratio, etc., etc., etc.)
I think there's plenty of good advice in this post, though the OP doesn't talk as much about the evils of premature abstraction as one might like. Still, they do talk about how to avoid it using reasonable programming guidelines.
In the talk about data structures, I was reminded of Fred Brooks' quote from MMM: "Show me your flowcharts, and conceal your tables, and I shall continue to be mystified; show me your tables, and I won't usually need your flowchart; it'll be obvious." Several people have translated it to something like "Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious," for a modern audience.
Several years ago I was happy to work with several people with an interest in philosophical history. We whiled away the hours thinking about whether these quotes represented something of the tension between Heraclitus (you cannot step into the same river twice) and Plato (everything is form and substance). So... I think the observation about the alternating utility of form and function is an old one.
As for Heraclitus vs. Plato, I think the lesson I’m trying to teach is to not pick a side until you understand each position’s implications and which of those might be more beneficial to the problem at hand ;)
A well chosen abstraction dramatically simplifies code and understanding, and a poorly chosen abstraction has the opposite effect.
When building a system some choices about abstractions can be made early, before the problem domain is fully understood. Sometimes they stand the test of time, other times they need to be reworked. Being aware of this and mindful of the importance of good abstractions is key to good system design.
In all seriousness though, you do hit a great point. The moment you stop being embarrassed about your mistakes and set your ego aside, is the moment that you can truly start learning from those same mistakes. At some point it even becomes the only way you can move forward, unless you want to stay boxed inside a niche of expertise defined by your own self-set boundaries.
> /// BAD: This mutates `input`, which may be unexpected by the caller.
> [...]
> /// GOOD: `input` is preserved and a new object is created for the output.
Neither of these are good or bad without knowing their context and understanding their tradeoffs. In particular, sometimes you want to mutate an existing object instead of duplicating it, especially if it's a big object that takes a while to duplicate.
> Post-Architecture is a method of defining architecture incrementally, rather than designing it upfront
For anyone else wondering what it means.
I'm going to be honest, almost all architecture I've seen out in the wild has followed a more incremental approach. But then again everywhere I've worked hasn't separated the architecture/coding roles.
If you work with C# or Java in a lot of places, such as Banking in particular, you'll definitely see a lot of up-front architecture and excess abstractions early on.
>> Often, an abstraction doesn’t truly hide the data structures underneath, but it is bound by the limitations of the initial data structure(s) used to implement it. You want to refactor and use a new data structure? Chances are you need a new abstraction.
There's no greater joy in life than jumping through an abstract object, an object interface, and a factory method only to find out that the factory only services one object.
In my experience, I've always found the devil to be in the [late] details.
I have learned (the hard way), that, no matter how far I go down the rabbithole, in architecture, I am never a match for Reality.
I. Just. Can't. Plan. For. Everything.
I've learned to embrace the suck, so to speak. I admit that I don't know how things will turn out, once I get into the bush, so I try to design flexible architectures.
Flexibility sometimes comes as abstractions; forming natural "pivot points," but it can also come from leaving some stuff to be "dealt with later," and segmenting the architecture well enough to allow these to be truly autonomous. That can mean a modular architecture, with whitebox "APIs," between components.
People are correct in saying that OO can lead to insane codeballs, but I find that this is usually because someone thought they had something that would work for everything, and slammed into some devilish details, down the road, and didn't want to undo the work they already did.
I have learned to be willing to throw out weeks worth of work, if I determine the architecture was a bad bet. It happens a lot less, these days, than it used to. Hurts like hell, but I've found that it is often beneficial to do stuff I don't want to do. A couple of years ago, I threw out an almost-complete app, because it was too far off the beam. The rewrite works great, and has been shipping since January.
Anyway, I have my experience, and it often seems to be very different from that of others. I tend to be the one going back into my code, months, or years, after I wrote it, so I've learned to leave stuff that I like; not what someone else says it should be like, or that lets me tick off a square in Buzzword Bingo.
My stuff works, ships, and lasts (sometimes, for decades).
I know HN doesn't like quibbles about site design, but I'm literally having difficulty reading the article due to the font size being forced to be at least 1.3vw. Zooming out doesn't decrease the font size! Downvote if this is boring, but (a) I've never seen a site that did that before, so it's just notable from a "Daily WTF" kind of perspective, and (b) just in case the submitter is on HN: it's actually preventing me from reading the content (without changing it in DevTools anyway).
I would love a language that has this gradual, evolutionary abstracting as a core concern. That makes it easy. Where you can start from the simplest imperative code and easily abstract it as the need for this arises.
For example a language that requires "this." or "self." prefix is not such language because you can't easily turn a script or a function into a method of some object.
> I would love a language that has this gradual, evolutionary abstracting as a core concern. That makes it easy. Where you can start from the simplest imperative code and easily abstract it as the need for this arises.
This is about how I write Clojure.
I start out with some code that does the thing I want. Either effectful code that "does the thing" or functions from data to data.
After a while, I feel like I'm missing a domain operation or two. At that point I've got an idea about what kind of abstraction I'm missing.
Rafael Dittwald describes the process of looking for domain operations and domain entities nicely here:
Here’s the scenario: hot-shot intern comes in, calls a meeting to use generics so things can be done “easier”. He does a good job of presenting it and its value; the dumbass team lead OKs it. Fast forward one week, and everyone is complaining about how much of a pain it is to use.
I used to read and write a lot of Scala and a bit of Haskell code which claims to be very similar to what I think you're mentioning here. There people also start with defining the domain in interfaces (algebras, eDSLs) and data types.
In the end it's still the same indirection and abstraction as in any other Java or Go codebase, and it prevents the developer from easily accessing the actual logic of the program.
What I find difficult to understand here is that the logic of the program, in the case of a Haskell-like language, is encoded in the types.
I don’t need to look at the definition of a Monoid instance for a type (unless, in rare cases, it matters for some reason). I know there’s an identity element of the type and there’s a binary operation that combines elements of the type. Any type that’s a lawful Monoid works the same way.
And it goes up from there.
Denotational design is significantly different from an operational design. It’s not surprising that most programmers are taught and tend to think in terms of how computations are carried out instead of what computations are needed and the ways we can compose them. I still struggle with it at times.
I classify it separately from “indirection,” because of the laws that govern a design process like that. The same rules used in algebra work in programs where you can rely on being able to substitute terms for symbols representing those terms. And algebra seems to have been a rather successful language for manipulating expressions.
Where it does break down though is at the edges where we need to interact with run time exceptions, the state of resources external to the process, etc. Even in Haskell you can write your code procedurally if you want. You won’t get the benefits of denotational design but at least you’ll still have a pretty decent type system that helps you with refactoring and extracting “logic” later on into something more understandable and easier to work with.
> at the edges where we need to interact with run time exceptions, the state of resources external to the process, etc. Even in Haskell you can write your code procedurally if you want. You won’t get the benefits of denotational design
I'd go further and say that effect systems, MTL etc. implement denotational design for procedural code. You can say exactly what your operations denote: a procedure that can throw exceptions only, update state only, a combination of both but not I/O, etc. etc.
The discussion of procedural code doesn't make sense to me, because it seems to mix together some orthogonal concepts.
Procedural is not the opposite of object-oriented (nor is it particularly contrasting); idiomatic OOP is procedural to a large degree. Effective functional programming happens when you ditch the procedural approach in favour of a more declarative approach.
Though I agree about the point about not creating objects/instances where a pure function will get the job done, I disagree with the general stance against OOP. I think OOP is absolutely essential to simplicity. FP tends to lead to too many indirections with data traversing too many conceptual boundaries. FP (if used dogmatically) tends to encourage low cohesion. I want high cohesion and loose coupling. Some degree of co-location of state and logic is important since that affects cohesion and coupling of my modules.
The key to good OOP is to aim to only pass simple primitives or simple cloned objects as arguments to methods/functions. 'Spooky action at a distance' is really the only major issue with 'OOP' and it can be easily solved by simple pass-by-value function signatures. So really, it's not a fundamental issue with OOP itself. OOP doesn't demand pass by reference. Alan Kay emphasized messaging; which is more leaning on 'pass by value'; a message is information, not an object. We shouldn't throw out the baby with the bathwater.
When I catch a taxi, do I have to provide the taxi driver with a jerrycan full of petrol and a steering wheel? No. I just give the taxi driver the message of where I want to go. The taxi driver is responsible for the state of his car. I give him a message, not objects.
If I have to give a taxi driver a jerrycan full of petrol, that's the definition of a leaky abstraction... Possibly literally in this case.
That said, I generally agree with this article. That's why I tend to write everything in 1 file at the beginning and wait for the file size to become a problem before breaking things up.
There are many different ways to slice things up and if you don't have a complete understanding of your business domain and possible future requirement changes, there is no way you will come up with the best abstractions and it's going to cost you dearly in the medium and long term.
A lot of developers throw their arms up and say stuff like "We cannot anticipate future requirement changes"... Well of course, not on day 1 of your new system!!! You shouldn't be creating complex abstractions from the beginning when you haven't fully absorbed the problem domain. You're locking yourself into anti-patterns and lots of busy-work by forcing yourself to constantly re-imagine your flawed original vision. It's easier to come up with a good vision for the future if you do it from scratch without conceptual baggage. Otherwise, you're just seeding bias into the project. Once you have absorbed it, you will see, you CAN predict many possible requirement changes. It will impact your architecture positively.
Coming up with good abstractions is really difficult. It's not about intelligence because even top engineers working in big tech struggle with it. Most of the working code we see is spaghetti.
Thanks! I would just like to clarify I’m actually not opposed to OOP at all, and at several points I tell people it’s fine to go in that direction for the problems where you need it. I do try to warn against it as a go-to solution before you’ve understood what are the problems that actually need fixing, which it sounds like we’re pretty aligned on.
Indeed if you pass by value/use immutability where feasible, you already avoid most of the issues I’m warning against, so it sounds like you found a sensible way to apply it while avoiding the pitfalls.
> If I have to give a taxi driver a jerrycan full of petrol, that's the definition of a leaky abstraction... Possibly literally in this case.
My view on FP is that one of its main benefits in terms of avoiding bugs is that it forces you to turn all your 'movable state' into raw messages. The fact that it also abolishes 'unmovable state' is just a quirk of it, not necessarily a benefit.
My main point in support of OOP is that OOP doesn't prevent you from making all your 'movable state' into raw messages too. Properly encapsulated instance state is not dangerous. What I like about OOP is that it offers some additional benefits in terms of high cohesion and loose coupling because co-locating logic with related state helps to write high cohesion, loosely coupled modules. It reduces the amount and complexity of state that needs to be transferred between modules. It allows my messages and function/method signatures to be even leaner than FP allows.
If you want to catch a taxi to the airport, FP still requires that you bring a jerrycan of fuel and steering wheel to give to your taxi driver, the only restriction is that he cannot alter them (no mutations). The benefit of this is that you can fully trust that the integrity of your jerrycan and steering wheel has been maintained after the trip and you can then confidently re-use them for your airplane as well... Anyway, as elegant as that seems in theory, it's not quite how the world works.
> The real problem with the class Foo above is that it is utterly and entirely unnecessary
I see this sentiment a _lot_ in anti-OO rants, and the problem is that the ranter is missing the point of OO _entirely_. Hard to fault them, since missing the point of OO entirely is pretty common but... if you're creating classes as dumb-data wrappers and reflexively creating getters and setters for all of your private variables then yes what you're doing _is_ utterly and entirely unnecessary, but you're not doing object-oriented design at all. The idea, all the way back to the creation of OO, was to expose actions and hide data. If you're adding a lot of syntax just to turn around and expose your data, you're just doing procedural programming with a redundant syntax.
As someone who does a lot of python, TS, dotnet and java - I disagree. The problem with dotnet and java is that everything is an object. And for many cases, I don't need that object at all; it can be a static class - but honestly, the python concept of a module fits a lot better. It's a grouping of functions in a module, not a class holding functions.
I've been a software eng professionally for 25 years. Have been coding more like 30 - 35. There is a fundamental principle here that I agree with, and it surrounds a code smell that Martin Fowler termed "Speculative Generality" in his book "Refactoring."
Speculative Generality is when you don't know what will have to change in the future and so you abstract literally everything and make as many things "generic" as you possibly can in the chance that one of those generic abstractions may prove useful. The result is a confusing mess of unnecessary abstractions that adds complexity.
However, yet again I find myself staring at a reactionary post. If developers get themselves into trouble through speculative generality, then the answer is clearly "Primitive Obsession" (another code smell identified in "Refactoring") right?
Primitive Obsession is the polar opposite of abstraction. It dispenses with the introduction of high-level APIs that make working with code intuitive, and instead insists on working with native primitive types directly. Primitive Obsession often comes from a well-meaning initiative to not "abstract prematurely." Why create a "Money" class when you can just store your currency figure in an integer? Why create a "PersonName" class when you can just pass strings around? If you're working in a language that supports classes and functions, why create a class to group common logical operations around a single data structure when you can instead introduce functions, even if they take more parameters and could potentially lead to other problems such as "Shotgun Surgery"?
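The canonical example, sketched (TypeScript, hypothetical names): a tiny Money type gives the invariants a home that a bare integer can't provide.

```typescript
// Primitive obsession: is this cents or dollars? Which currency? Can it be negative?
function refundPrimitive(amount: number, currency: string): void {
  // ...
}

// A small abstraction carries the answers along with the value.
class Money {
  private constructor(readonly cents: number, readonly currency: string) {}
  static of(cents: number, currency: string): Money {
    if (!Number.isInteger(cents) || cents < 0) throw new Error("invalid amount");
    return new Money(cents, currency);
  }
  add(other: Money): Money {
    if (other.currency !== this.currency) throw new Error("currency mismatch");
    return new Money(this.cents + other.cents, this.currency);
  }
}
```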
This is not to say that the author is wrong or that one should embrace "premature abstraction." Only that I see a lot of reactionary thinking in software engineering. Most paradigms that we have today were developed in order to solve a very real problem around complexity at the time. Without understanding what that complexity was, historically, you are doomed to repeat the mistakes that the thinkers at the time were trying to address.
And of course, those iterations introduced new problems. Premature Abstraction IS a "foot gun." What software engineers need to remember is that the point of Design Patterns, the point of Abstractions, the point of High-Level languages and API design is to SIMPLIFY.
One term we hear a lot, that I have been on the war path against for the past decade or two is "over engineering." As engineers, part of our jobs is to find the simplest solution to a given problem. If, in your inappropriate use of a given design pattern or abstraction, you end up making something unnecessarily complicated, you did not "over engineer" it. You engaged in BAD engineering.
When it comes to abstractions, like anything else, the key is to gain the experience needed to understand a) why abstractions are useful and b) when abstractions can introduce complexity, and then apply that to predicting what will likely benefit from abstraction because it will be very difficult to change later.
All software changes. That's the nature of software and why software exists in the first place. Change is the strength of software but also a source of complexity. The challenge of writing code comes from change management. Being able to identify which areas of your code are going to be very difficult to change later, and to find strategies for facilitating that change.
Premature Abstraction throws abstractions at everything, even things that are unlikely to change, without the recognition that doing so makes the code more complex, not less. Primitive Obsession says "we can always abstract this later if we need to" when in some situations, that will prove impossible (e.g. integrating with and coupling to a 3rd-party vendor; a form of "vendor lock-in" through code that is often seen).
Fine blog post overall, but the author fell to premature abstraction themselves in declaring that little Foo class bad. It's entirely too generalized for me to say anything negative about at all. Depending on the context, a tiny class like that could be completely sensible or utterly unnecessary.
[1] - https://www.joelonsoftware.com/2001/04/21/dont-let-architect...