Hacker News
Common Mistakes in Modularisation (two-wrongs.com)
96 points by thunderbong 3 months ago | 26 comments



I disagree with most of the article. The author states that the reason we start at the bottom is that "it feels good". This is false. We start at the bottom because that's what a product spec is. The product spec can impact any node but will mostly be at the bottom. So we must start there and work backwards to figure out which engineering choices will meet that product spec.

Second, the author talks about building "program families" instead of just a program. The problem with this concept is that it is essentially asking you to divine the future. Your success at divining how the program will change is what determines the success of this approach. That is to say, if you're in a space where you absolutely cannot predict how the requirements will change, the optimal strategy is to just build for the requirements you have right now.


Not only that, but the phrase "program families" is poor, imo. The phrase he's looking for is "a class of programs", similar to the idea that in diff EQ you don't derive a single formula, you derive a class of formulas into which you can readily plug values to get the specific formula you need.

But also, his point about configuration creating slightly different programs is true, but modularity suffers from the exact same problem.
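
To make the "class of programs" analogy concrete, here's a minimal Python sketch (the names and the decay example are invented for illustration): the factory captures the whole family, and plugging in constants picks one specific member, the way fixing constants in a general solution yields one particular solution.

    from math import exp

    def exponential_decay(initial_value, rate):
        """The family y(t) = y0 * exp(-k * t); fixing y0 and k selects one member."""
        return lambda t: initial_value * exp(-rate * t)

    # One concrete "program" from the family, obtained by plugging in values.
    cooling_curve = exponential_decay(initial_value=90.0, rate=0.12)
    print(cooling_curve(10.0))  # evaluate that particular member at t = 10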


For a phrase representing the highly unformalized concept of where a software project may go in the future relative to unpredictable customer, business, resource and developer needs, "family" seems appropriate. There is continuity, but not necessarily any other logical relationship being maintained.

Sets of differential equations with specifiable constants have more in common with a compiler exposing customizable parameters, forming a "class" of behaviors. Or a parser generator taking a grammar and producing a well-defined "class" of parsing algorithms.


The article makes excellent points. Many engineers (and the entire "agile" paradigm) are focused on making the leaves overly concrete and calling it major progress.

This is due to significant cynicism about many aspects of product and engineering.

I've encountered many systems built leaf-first that end up with no real architecture: just refactor after refactor, usually leaving a few "highly skilled" developers who know the system inside out while nobody else can figure it out, because by that point it is so illogical. The worst is when old names are kept but no longer mean what they mean in plain language.

System design and architecture (especially when the product is open-ended, as at any startup) is hard, and over-polishing leaves is comparatively easy. Many startups fail before they face the music for having no architecture, and the engineers go on to the next job thinking they did everything right.


This post is not wrong, but it fails to distinguish between high-level product decisions and high-level engineering implementation decisions. They are two sides of the same coin, to be sure, but there is an incredibly common failure mode where product and engineering "agree" on a high-level direction, but each is holding different assumptions in their head about what that means. That's why framing like:

> The mistake of focusing on concrete details

is incredibly dangerous. Engineers love to think about abstractions, and leaders managing a lot of people are all too eager to buy into the promise of magic abstractions that align everyone and produce optimal results with the army of worker bees figuring out all the details of the perfect vision created by the god-like leaders.

The only problem is that the devil is in the details. Always. The only way you can make a good decision at the top of the tree, is by being an incredibly seasoned expert that has a strong intuition about how the details are going to play out on the leaf nodes. You need this on both the engineering and the product side. And even then, you will always discover things once you start building, and if you care about a quality product you will have to back things out. The only defense against this is relentless focus on simplicity, which tends to be a viable option (in the consumer space at least) because at the end of the day, your users have finite attention spans and do not want to have to deal with anything too complicated.


> The only problem is that the devil is in the details. Always. The only way you can make a good decision at the top of the tree, is by being an incredibly seasoned expert that has a strong intuition about how the details are going to play out on the leaf nodes.

You got that right. I've been developing for over a decade at different kinds of companies: "the devil is always in the details".

However, it is the part below that gets in the way:

> is incredibly dangerous. Engineers love to think about abstractions, and leaders managing a lot of people are all too eager to buy into the promise of magic abstractions that align everyone and produce optimal results with the army of worker bees figuring out all the details of the perfect vision created by the god-like leaders.


I wonder if domain-specific templates/views could be made to help bridge the gap between the leaves (layouts, colors, etc.) and the base constraints (performance, security, etc.). Engineers wind up acting like compilers for the product owners' ideas, and sometimes that shit don't compile and it can be pretty tough to explain why not.
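
Purely as a speculative sketch of that idea (every name here is hypothetical): a declarative view spec that carries the leaf details but can also be checked against the base constraints, so "that doesn't compile" comes back as an explainable error rather than a shrug.

    from dataclasses import dataclass

    @dataclass
    class ViewSpec:
        layout: str               # leaf detail
        accent_color: str         # leaf detail
        fetches_per_render: int   # ties the leaf to a performance constraint
        allows_anonymous: bool    # ties the leaf to a security constraint

    def check(spec: ViewSpec) -> list[str]:
        """Return human-readable reasons the view violates the base constraints."""
        problems = []
        if spec.fetches_per_render > 3:
            problems.append("too many fetches per render for the latency budget")
        if spec.allows_anonymous and spec.layout == "admin":
            problems.append("admin layout cannot be exposed to anonymous users")
        return problems

    print(check(ViewSpec("admin", "#ff8800", fetches_per_render=5, allows_anonymous=True)))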


The leaf elements are often coupled by a protocol, even if it's as simple as "call this API to return a result". Domain language is derived from seeing that a certain conventional protocol is used over and over.

So to me, protocol is the focus of modularity, because it lets you talk within the domain. However, protocol isn't addressed by generalizing it: that just means you have added a container for the meaningful part of the content.

Something that comes up when working in a point-free style, as in Forth or APL, is that protocol is shoved in your face immediately because the sequence of arguments now matters, and that has downsides (stack underflow errors?) but also upsides (conciseness, ease of factoring). When I reach for that, it tends to mean that I intend to find a tighter specification for the domain.
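
As a loose illustration of how the argument sequence becomes the visible protocol (a Python sketch rather than Forth or APL, with an invented pipeline):

    from functools import reduce

    def compose(*fns):
        """Compose left to right: compose(f, g)(x) == g(f(x))."""
        return lambda x: reduce(lambda acc, fn: fn(acc), fns, x)

    # The "protocol" is the shape of the value flowing through the pipeline:
    # each stage must accept what the previous stage produced.
    tokenize = compose(str.strip, str.split)
    print(tokenize("  keep it simple  "))   # ['keep', 'it', 'simple']

    # Reordering the stages breaks it (a list has no .strip), the point-free
    # analogue of a stack underflow: the sequence is the interface.
    # compose(str.split, str.strip)("  keep it simple  ")  # would raise TypeError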


I reject the idea that the software world is somehow unique in the "Configuration options create multiple products" idea. One need only look at cars to see an easy example of a similar frame being the base of several models.

Now, I think one could take the idea that an engine and a frame can both be seen as modules of a complete vehicle. But... that really just underlines my point that software is not unique here.


> This, ultimately, leads us to what may seem like a paradox of product design.

> - We need to focus on the high-level design questions first, because otherwise we will make incoherent detailed design decisions.

> - It is important that we get the high-level design questions right, which we can only do if we postpone them for as long as possible.

This rings true to me. At least I can’t find a way around it per se.

In practice I have two ways to deal with (but not fix) this problem:

1. Make it easy to prototype. Trying things out is often the fastest way to disprove a hypothesis, or to modify it.

2. Don’t over-invest in the solution, even if it’s for production. Instead, pick the easiest solution. Almost all the overengineering I’ve done has been wasted, because the high-level decisions were changed anyway. Leave a comment, but don’t implement.

At the end of the day, you have to develop a gut-feeling for unknowns. Realizing what you don’t know is an invaluable asset, if you shift your method accordingly. The more experience I get, the more I realize how little I know. On the plus-side, it’s wonderful when you do find those unexpected problems that are very clear and optimizable, and a lot of fun.


This post is a Rorschach Test.

// As reflected in these comments.


Perhaps a small nit, but the “design space tree” looks more like a directed graph.


I agree with most of the article, and I will try to incorporate some of the concepts in my future discussions with colleagues and external consultants.

I do have some problems with this, though:

> In practise we never end up walking the design tree forever. Residue from our past mistakes (the dotted nodes) lingers in the codebase and makes it ever harder for us to walk around the design tree. Eventually, fresh competitors will be able to do it quicker and take our users.

I mean, is this really the case? Maybe it is if you work with smartphone apps, but apart from that, I cannot really think of an example, either in my own niche (administrative systems in mid-to-large companies, so ERPs, reservation systems, etc.) or among things I never worked on but use regularly.

E.g.: Google shut down all their attempts at creating a social network (I remember Orkut and Google+; there were probably others). They were the competitors (to Facebook), and no matter how "nimble" they were in adding or changing features, in the end they just quit.

Word/Office is probably another classic example of something that has lots of "design decisions" layered like geological strata, but it is still dominating the market. (I am a Mac user, btw, so I am not talking about OpenOffice as an "innovative alternative" here.)

The author seems to think that customers can't wait to get rid of, I dunno, Oracle Finance, as soon as someone comes along with more features... in my experience the reality is that no matter how much they hate the old workhorse, having to convert lots of existing data (and adapt processes, and retrain personnel) is a cost which is very difficult to justify, and no amount of "shiny new features" will really be the main driving force in such decisions.


> I mean, is this really the case?

"Disruption" is a smaller company with less baggage finding a way to do something better than some ancient behemoth. Pick any of many examples that show that's the case.

If it weren't for disruptive startups, HN would not exist.


The point isn't that disruption can't happen, as we know it can. The point is that it can take a lot to dislodge an entrenched product, usually more than just new features, especially rarely used new features.

As pointed out, the main driver is switching cost. Going to HN has low switching cost. Moving off Oracle has very high switching cost. I know of companies that have been actively trying to move off mainframes for 30+ years and are still paying switching costs.


Yes, this was exactly my point, and thanks for having articulated it better.

In my experience "Wow, X provides much better features than Y" works in two cases:

a) I am doing product selection, i.e. I am not using Y or anything else, but I now need to do something new and X will get my money because it is a better fit.

b) I am already using something, but switching to X has negligible cost. Like, I am tired of Evernote, I'll switch to Whatever, they even provide a webscraper that will automate migration with little effort.

So yes, especially for "a" the competitor will gobble up your (potential) new customers. But this is not exactly the most common scenario.

I understand that HN is geared mostly towards startups and SaaS companies, but we shouldn't forget that the vast majority of companies were created before 2023 (or 2010, or 2000), and therefore they have so much invested already in dinosaur-era application stacks that "implementing new features faster" will not be enough to make them switch.


> "Disruption" is a smaller company with less baggage finding a way to do something better than some ancient behemoth.

It's much more complicated than that, and it's really worth reading https://en.wikipedia.org/wiki/The_Innovator's_Dilemma to understand the concept.

In general, disruption happens when the rules of the industry change. It's not just a matter of a small company with less baggage undercutting an established player.


disruption isn't a single thing; it can absolutely come from a more nimble competitor.


You're confusing disruption with ordinary competition.

See https://en.wikipedia.org/wiki/Disruptive_innovation

I.e., if business A loses out to business B because it's "more nimble," that's not disruption.

I.e., if/when electric cars take off, gas stations will be "disrupted." Not because the competitors were more nimble, but because the fundamental rules of how people charge cars are different from how people buy gas. (Most charging happens at home or at a destination, therefore a gas station that simply swaps out pumps for chargers won't see the same kind of business.)


and I think your language skills are lacking; words can have multiple meanings, with subtleties. I doubt you would get most of the general population to agree that a competitor knocking someone out of the running, or forcing them to fundamentally change how they do things, is not a disruption.

you can even see it in the wiki article you linked. They gave it a unique phrase (disruptive innovation) rather than just saying "disruption" followed by a mic drop.


> They gave it a unique phrase (disruptive innovation) rather than just saying "disruption" followed by a mic drop.

Technically you are right. But other uses are right too. Audience context matters.

So many people here are familiar with "disruptive innovation" vs. "non-disruptive innovation" and "sustaining innovation", and find those distinctions relevant, that "innovation" can often be dropped without much confusion here.

I.e. "disruptive", "non-disruptive", and "sustaining" + <competition>, <technology>, <strategy>, <business model>, etc.

It would be very cumbersome to include "innovation" in every one of those phrases.

It would beat the word to death 1000 times over, if every business or technology discussion had to keep explicitly saying it, when it can almost always be assumed to be relevant.

Language isn't just definitions, it is also context/culture/group sensitive. Which doesn't make you wrong - but it doesn't make the more specialized usage wrong or less useful, if you are sensitive to it.

When in doubt, use the natives' interpretation.


I find it amusing that I stepped into this conversation stating disruption can mean multiple things and you felt I was the one that needed to be lectured on the idea that disruption can mean many different things.

go lecture someone else.


I don’t think I’d ever call anything from google “nimble”


They were pretty nimble in the late 2000s and early 2010s.


Maybe in the 2000s; by the 2010s they just started products that would fail (or sometimes not) and then killed them.

Maybe that fits a definition of nimble, but I wouldn’t call it that.


"we figure out the interface that matches all possible decisions on that question, and write the code against that interface"

i'd sure like to use this button somewhere else, but i'm not clever enough
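
For what it's worth, a minimal sketch of what the quoted advice can look like in practice (all names here are hypothetical): the open question "which store do we use?" is captured as an interface, so the leaf code can be written before that decision is made.

    from typing import Protocol

    class NoteStore(Protocol):
        def save(self, key: str, text: str) -> None: ...
        def load(self, key: str) -> str: ...

    class InMemoryStore:
        """One possible answer; a SQL- or file-backed store could replace it later."""
        def __init__(self) -> None:
            self._notes: dict[str, str] = {}
        def save(self, key: str, text: str) -> None:
            self._notes[key] = text
        def load(self, key: str) -> str:
            return self._notes[key]

    def rename_note(store: NoteStore, old: str, new: str) -> None:
        # Leaf logic depends only on the interface, not on the eventual decision.
        store.save(new, store.load(old))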



