Obfuscating Complexity Considered Harmful (rule11.tech)
29 points by definetheword 8 months ago | hide | past | favorite | 10 comments

> Using abstraction to manage complexity is fine. Obfuscation of complexity is not.

The post pushes these words around without giving any indication of how to actually do anything. It seems like an exercise in moving definitions around.

The important thing when dealing with complexity is to decompose it into simpler parts that combine neatly. That is one meaning of 'good abstraction'. I don't like the word abstraction, just as I don't like the word refactor--both sound like arbitrarily collecting and moving things about.

The word I use is 'factor': I say I'm factoring the code. Then you have to be able to name which factor a particular piece of code deals with. If you can't answer that, you're just moving stuff around. And once the factoring is done, it shouldn't immediately need to be redone/refactored.

If something is complex because it has N dimensions, then try to create N components, one per factor, as far as that is possible--rather than refactorings that handle ad-hoc combinations of dimensions, following the chronological order of development instead of the structure of the problem, and leaving empty seams behind.
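To make "one component per factor" concrete, here is a toy Python sketch (all names invented for illustration): a report has two independent dimensions, data source and rendering, and each gets its own small component instead of one component per combination (CsvTextReport, CsvHtmlReport, DbTextReport, ...).

```python
def rows_from_csv(text):
    """Data-source factor: parse comma-separated lines into rows."""
    return [line.split(",") for line in text.strip().splitlines()]

def render_text(rows):
    """Rendering factor: plain-text table."""
    return "\n".join(" | ".join(row) for row in rows)

def render_html(rows):
    """Rendering factor: minimal HTML table."""
    cells = "".join(
        "<tr>" + "".join(f"<td>{c}</td>" for c in row) + "</tr>"
        for row in rows
    )
    return f"<table>{cells}</table>"

# The factors combine neatly: 2 + 2 small components cover 2 x 2
# behaviours, and adding a new data source touches nothing else.
print(render_text(rows_from_csv("a,b\nc,d")))
```

Each function can be named after the factor it handles, which is exactly the test proposed above: if you can't say which dimension a piece of code covers, it's just moved code.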

I got the same feeling from this post. It warns against creating poor or unnecessary abstractions without demonstrating how to identify and avoid the problem.

How can you look at one interface/abstraction/API and concretely say "this is sweeping complexity under the rug"? What exactly makes that a fact instead of just an opinion?

If you entangle different interfaces in the same place, changing them becomes even more complicated and error-prone, so adhering to SOLID principles hopefully leaves you with less of a mess. Each interface usually has its own nuances, and how you compose them--in scope, processing order, and time--can introduce complexity, side effects, and additional workarounds or frameworks.

Don't just adhere to something blindly. Keep it as simple as possible and iterate on designs until they make practical sense. The focus is on concrete implementations and results.
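As a rough illustration of keeping interfaces untangled (all names here are hypothetical), each concern lives behind its own small object, and the composition happens in one visible place rather than being smeared across both:

```python
class Store:
    """Persistence concern only."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

class Notifier:
    """Notification concern only."""
    def __init__(self):
        self.sent = []
    def notify(self, message):
        self.sent.append(message)

def register_user(store, notifier, name):
    # Composition happens here, in one small place; each dependency
    # is passed in, so either one can change (or be swapped in tests)
    # without touching the other.
    store.save(name, {"name": name})
    notifier.notify(f"welcome {name}")

store, notifier = Store(), Notifier()
register_user(store, notifier, "ada")
print(store.load("ada"), notifier.sent)
```

The point is not the pattern itself but that the entanglement is confined to one short function you can read in full.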

Yes, it's otherwise all opinion and fashion.

One such observation is that all abstractions come with costs, and that they are usually just indirections and filters over the real thing. The word itself is tough to communicate clearly and reason about. Ditto for refactoring, which doesn't explain much either.

Concretely, if you minimize dependencies and entanglement and compose a minimal solution well, it is easier to evolve the code later, after learning more from practical experience and new ideas. So it's often about waiting until the benefits clearly outweigh the costs of abstraction.

I'd say obfuscation is creating a level of indirection just to look modular. Moving complex, interdependent side effects into separate functions just hides what is going on instead of clarifying it.

  void MajorFunction( void ) {
      DoFirstThing();
      DoSecondThing();
      DoThirdThing();
  }

This can often be better inlined, because then the complexity is obvious.

Naming and shared use can make this one or the other. If it is a specific sequence of things that are commonly done together, then putting it in one place means an update will keep all call sites in sync. A small and simple function can sometimes be worth it, as opposed to an extraction done 'mechanically' (by humans), not for its semantic significance but merely because 'I can dedup this'.

The litmus test is whether a reader who sees the function name at a call site can presume what it does. There should be high confidence (across all readers) that the presumption is correct, and that if they were to look at the body they would see it do precisely what they thought. It doesn't matter if it does it differently, as long as it has the same result as the expectation. Also, don't name it one thing and then do two things in there.


RFC 1925 "Fundamental Truths of Networking", section 2:

  (6)  It is easier to move a problem around (for example, by moving
       the problem to a different part of the overall network
       architecture) than it is to solve it.

       (6a) (corollary). It is always possible to add another level of
            indirection.
A week or two ago, I was working with a colleague who wanted to get me set up to deploy my code to an AWS service he had provisioned for us. "I use this tool that makes it super-simple," he said. It's true, typing the command to deploy takes no time at all.


It took an entire day's work to install the prerequisites. Docker. Node.js. The authenticator tool that for some reason needs an entire headless Chromium. The tool that actually does the deployment. Configuration files with magic strings in them. YAML. JSON. After a year and a half of fairly stable operation (at least as good as you can get from Windows 10), my work laptop now randomly bluescreens. Neither he nor I understand what most of this stuff does -- it's tribal knowledge he found on a corporate OneNote somewhere. I have no idea what we just automated away, or how difficult it would be to send my 500 lines of Python code to this cloud service by hand.

On the plus side, I can now deploy my code by typing ten characters in the console.

Uninstall Docker Desktop and restart Windows. Should be better after that abomination is purged.

As a software architect, I can tell you that complexity is bad. For architects, there are two types of complexity: accidental complexity and essential complexity. The latter is inherent in the problem: reducing essential complexity means you are no longer solving the problem you are supposed to be solving.

When I say problem, I don't just mean the functional requirements. I also mean the so-called non-functional requirements. If you are familiar with use cases or user stories, then you know of functional requirements. Non-functional requirements can take the form of Service Level Agreements, but they also include other metrics such as the cost of running the system and feature velocity.

Accidental complexity is unnecessary to solving the problem. It is a clear win to reduce that. Often you get to the point where you still have a lot of accidental complexity but cannot affordably reduce it any more. That is when you obscure complexity. That doesn't necessarily mean adding yet another layer of abstraction. It can also mean pushing the complexity from one already-existing layer to another. This can be very helpful if the layer of abstraction that you are pushing the complexity to has a lower feature velocity than the layer that you are pushing it from. I have written on this subject over at https://www.infoq.com/articles/obscuring-complexity/

In this scenario, I would argue that obscuring complexity is not harmful. In fact, it can be quite helpful.

The Arch Linux philosophy summarizes this concisely: "If you try to hide the complexity of the system, you'll end up with a more complex system."

