I've been programming for a long time now, close to 40 years. Back when I first learned programming, "top-down programming" was all the rage. This essentially means you take your top-level functionality and break it down into steps, each of which becomes its own module, and you keep decomposing each step into substeps until you have code.
There was a particular Zen, a flow, to top-down programming. You roughly knew the next thing you should work on, and the conceptual stack of things you had to remember was tracked easily by the stack of procedure names.
This is fairly close to what the author is talking about. "OO design" has been a disaster, but as a basic tool of coupling data with behavior, it works fine. If a class happens to match a real-world thing, well that's just a coincidence. A programmer's job is to get the computer to do what you want, not somehow model the world.
At the same time, the right abstraction can make all the difference. While my process is, tactically, very similar to the author's, what turns workmanlike code into true craft is finding the right concept that dramatically reduces the cognitive burden on the programmer.
In those 40 years, you will have seen all sorts of things come to the fore and then fade back into the background as the next "big" thing comes along. What many have not seen is that these are all tools we can use to solve the problems placed before us. Choosing the right abstraction for the problem at hand requires each of us to be able to communicate clearly with the problem-domain subject matter experts and understand what they need from us.
Understanding what kinds of abstractions are available to us and how to apply them is important. There is a proviso, however: even though two problems may seem similar and could usefully share the same abstraction, one must understand the problem-space ramifications of that abstraction, as the divergences can often come back and bite you.
Over the decades, many tools have been developed and each of them has some use in our toolbox, whether it be top-down design, bottom-up design, refactoring, objects, values, functional programming, assembly programming, static typing, dynamic typing, etc., etc., etc.
They do not have universal application, unless, of course, your favourite is a hammer and everything is a nail. These tools allow us to solve different kinds of problems in a less laborious way.
If anything, the last forty years have shown me that, as a whole, each generation of programmers is unable to learn from the previous ones. We get so caught up in our various wars over which languages or techniques are the bee's knees that we often forget that we are supposed to become craftsmen and craftswomen, able to solve the problems placed before us using whatever tools are available.
Each of us will have favourite tools, but we had all better be prepared to be competent and pick up whatever tools we are given and solve the problems before us.
Agreed. Something that got lost in the JavaEE world was that design patterns are there to make the problem easier to solve by leveraging the solution to a similar problem. Instead, many JavaEE programs are just class after class after class, all starting with the same prefix and having some new suffix like "Manager" bolted onto the end.
>A programmer's job is to get the computer to do what you want, not somehow model the world
This so exactly mirrors an ongoing debate in cognitive science. Programmers and philosophers should be talking about representation versus affordances and situated cognition. Actually, I suppose the same thing goes back to Chomsky vs. Skinner.
I think C++ classes and templates are incredible for data structures.
In the Java world, people try to make their entire program out of data structures, even complex data transformations, and unfortunately it ends up being a disastrous case of a square peg in a round hole.
What also helps immensely is the fact that C++ supports free functions. Free functions + templates allow for generic logic that is separated from the data it operates on, something Java doesn't allow in any remotely easy way. Yes, there are utility classes with static functions, but that is just horrible.
I agree with everything he has to say, but there's an important caveat missing: in his example, the code he starts with is trivial to understand. I can read through it linearly. I can see how the variables are used to achieve the layout. The functions draw_title() and draw_big_text_button() have obvious meanings, partly because I can see the x and y coords being passed in.
Comparatively, the code he ends up with is hard to understand. If I were new to the code, I'd be wondering what layout.window_title(title) does. I'd have to look up the definitions of the Panel_Layout data structure and the window_title() function. Instead of reading linearly, I now have a tree traversal to do. That is harder.
However, he's not wrong. The changes he made allow more things to be added to the UI panel without the code becoming an incomprehensible mess.
Let's call the original code the linear version. I think the reason his refactoring is good is that the linear function would otherwise have passed a threshold where it was too complex to fit into the working memory of the programmer. The tree-traversal exercise he is forcing the next programmer to do serves a purpose. Each function and data structure he introduced is small enough to fit in the programmer's working memory. After looking up those definitions, their (simple) behaviour can be parked in the programmer's medium-term memory. Once that investment is made, use of working memory is reduced, freeing up the possibility of adding more stuff to the UI panel function without it passing the threshold.
There's a trade-off. In some ways he made the code worse. But it was necessary. We should always be asking ourselves if the abstractions we are introducing are necessary. Understanding the constraints of programmers' brains is vital to being able to make that judgement.
At one point I even thought he might conclude that, despite having a very semantically compressed solution, he had reverted to the original one with a minor twist and some lessons learned. That would have been a lovely twist :)
In this case it probably was very worthwhile, since the code was so commonly used. These kinds of simplifications are often riddled with trade-offs that you get better at identifying, and making informed decisions about, as you mature and gain experience.
I do agree with pretty much everything written, though I do feel that it is hard to avoid writing bad code in the beginning (but we can certainly improve on it!).
I think functions should be small, have minimal to no side effects, and be trustworthy. That way you can see something like "layout.window_title(title)" and not need to go read that code, because it's not necessary, unless you were specifically making a change in that area. Even if the extra tree traversals can make debugging slightly trickier, it also makes debugging easier at the same time, because the logic is more compartmentalized and you don't have to read nearly as much code to solve the issue.
I love reading these articles that start with “everything that most programmers have been doing to write software for 30 years is so obviously bad that I don’t even need to bother supporting this argument”
When this article came out a few years ago, I reacted as you did. Whatever number of years of experience I've gained since then has taught me, with extreme clarity, that the author is correct about object oriented programming.
I've certainly learned a lot from the game programmer crowd, and I often find myself agreeing with the anti-OO sentiment these days, but why do they have to be so freaking smug about their favorite techniques? I don't see this kind of sneering prose nearly as often in other programming domains. It pays to be nice, FFS!
I wholeheartedly agree with the contents of the article, but I have to hold my nose while reading it.
> why do they have to be so freaking smug about their favorite techniques?
Because they haven't yet spent 30 years watching those techniques being used successfully only to read that, in fact, they obviously don't work at all.
I wonder about his distaste for "refactoring", the term and the concept, since this seems to be pretty in line with what I understand it to be.
Obviously plenty of architecture astronauts and cargo cults "refactor" their stuff and produce piles of SpaghettiOs (still spaghetti, but in nice little modules), but that's a straw man, isn't it?
I'm a big fan of passing fewer parameters in favor of a single struct - I'm glad that Casey unintentionally demonstrated the benefits of this. Panel_Layout can now be passed around by-ref in the event that this UI becomes large enough that it would warrant a few methods to build it.
I've been thinking about this recently. I like passing a struct if all/most of its attributes are used. But if the struct has, say, six attributes, and the function uses only two, it might be clearer if the function only took those two as parameters... While it might not be the most ergonomic solution, I think there's something to be said for clearly marking which parts of the struct are actually used in a function.
Also, thinking about it further, it's relatively easy to "lift" a function `String → Baz` to operate on structs like `{x: Int, y: Int, z: String}` (see Haskell's Lens), but it's not really possible to do it the other way.
You're right, but it can still make it obvious to the reader that you don't actually use all of the struct in a function. I think GP meant something like this:
(Haskell syntax)
-- struct with three fields
data Bar = Bar { x :: Int
               , y :: Int
               , z :: String
               }

-- function from Bar to Baz
foo :: Bar -> Baz
foo (Bar _ _ z) = ...
`foo` takes a whole `Bar`, but in the pattern matching bit you explicitly say you don't need x and y by using `_`.
Also, by enabling an extension[0], you can instead do:
I like a mental model based on real-world factory storage: if you have only a few kinds of objects, you just go with random placement. If you end up searching for things, you organize them alphabetically. When that gets too annoying, you number the locations and keep a list of what is where. If the list gets confusing in turn, you split it by type of component or product. Eventually you use a search engine that returns a building and coordinates.
Each of these is great until it isn't. The real magic is in anticipating what you will need and making the switch from one structure to the next.
Who knows, it might one day be possible to have the same code displayed as any of:
give john a cookie
give jack a cookie
give sam a cookie
give tina a cookie
and
giveCookies(john, jack, sam, tina)
and
giveCookies(testSubjects.living)
purely as a visual choice depending on the number of living test subjects, without the underlying code changing at all.
How we are to accomplish this I leave as an exercise for the reader.
He talks smack about OOP but launches into talking about his latest code project instead of explaining why it's bad. This guy would presumably also hate relational databases, since you have to think in a similar way in order to design those.
Class-oriented programming and object-oriented programming are not the same thing, and they don't serve the same purpose.
Class-oriented programming is used to define new types, like std::vector for example, whose existence the caller knows about. This is where the coupling of data and logic happens.
Object-oriented programming is all about interfaces (abstract classes) and tell-don't-ask (in one word: 'messages'), and is a tool for defining boundaries between modules.
The author seems to rant about class-orientation. Let's not throw the baby out with the bathwater.
"Well, OK, if you do it this way it kinda works. But design, design is absolute horseshit"
Give it another decade or two, and maybe he'll encounter a code base that justifies using OO design as well, and he'll write a new article (on how OOD is horseshit, except when you do it his way :).
It's almost as if you choose different tools for different purposes.
Language and program evolve together. Like the border between two warring states, the boundary between language and program is drawn and redrawn, until eventually it comes to rest along the mountains and rivers, the natural frontiers of your problem. In the end your program will look as if the language had been designed for it. And when language and program fit one another well, you end up with code which is clear, small, and efficient.
Obviously, as pg points out later on, this is much easier to do in Lisp, but that doesn't preclude you from using this style in other languages.
I love this and I can't wait until my manager announces we'll be doing compression-oriented programming. Which means at least 8 meetings about how to use Visio to model business processes and map that to Java classes. But with compression this time.
This has been posted on HN multiple times before, provoking little discussion, if any.
Author does nothing of interest. Writing lots of boilerplate and then figuring out how to reduce it seems wasteful. And calling the alternative to his approach OOP would be a false dichotomy.
You can do the same with much less mucking around with boilerplate by creating data classes first and business logic classes later.
In Delphi you have the best of both worlds. You can create a form, and when you inherit from it, you can rearrange all the UI elements and add more to your liking. If inheritance means "I want to reuse these properties", OOP works well.
"Writing lots of boilerplate and then figuring out how to reduce it seems wasteful."
"You can do the same with much less mucking around with boilerplate by creating data classes first and business logic classes later."
That's exactly what the author is trying to argue against. The author's point is that because business logic changes, and because you can't always be omniscient, it's hard to write the correct abstractions the first time. So although it sounds wasteful to write boilerplate and then abstract, it's less wasteful than abstracting, then coding, and then later having to change it all because you got it wrong.
I think he's demonstrating some important coding skills, but the changes he makes are very local, and it's also important to think about overall structure and the interaction between program components. The methods he's ridiculing have some utility for that kind of work.