
To boil it down simply:

People are broadly good and competent; they leave systems in a locally optimal state.

In general, only changes that are "one step" are considered, and when you are currently at a local optimum, those changes always leave things worse.

A multi-step solution will require a stop in a worse state on the way to a better one.
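
A toy illustration (my own sketch, not from the article): a greedy search that only accepts one-step improvements stalls at the first local peak, while the better peak is only reachable by passing through worse states.

    # 1-D "quality" landscape: local peak at x=2, global peak at x=7
    def quality(x: int) -> int:
        return [0, 3, 5, 2, 1, 4, 8, 10, 6, 0][x]

    def greedy(x: int) -> int:
        # consider only "one step" changes; accept only improvements
        while True:
            better = [s for s in (x - 1, x + 1)
                      if 0 <= s < 10 and quality(s) > quality(x)]
            if not better:
                return x  # locally optimal: every one-step change is worse
            x = max(better, key=quality)

    print(greedy(1))  # 2 -- stuck on the local peak
    print(greedy(5))  # 7 -- the global peak, reachable only from beyond the valley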

Monotonic-only improvement is the path to getting trapped. Take chances, make mistakes, and get messy.




I think "better" is ambiguous.

Better for developers? Better for users?

Better for speed? Better for maintenance? Better license? Better software stack? Better telemetry? Better revenues through subscriptions?


I would disagree with that. We have quality measures to assess what 'better' can mean in software engineering (e.g. maintainability, reliability, security, performance, etc.). You are right that it is not fixed what is most important. It may vary from one organization to another, but it can be conceptualized and then made measurable to some degree.


At a business, all of these: a good engineer/architect has to find the right balance.


Different engineers can have different interpretations of "right balance" -- and many of them may be correct. Which makes "better" again ambiguous.


I think this is actually addressed in the article:

> The key question for the designer is, “What would the system’s structure need to be so that <some feature> would be no harder to implement than necessary?” (It’s a bit surprising when designers don’t ask this question, instead simply asking, “What should the design look like?”—for what purpose?)

During my career, I have been in many situations where the SW architects tried to answer the second question, as if architectural cleanliness were the goal unto itself. Software design patterns were misused, unneeded abstractions abounded everywhere, class hierarchies were created 15+ levels deep. It was often argued which design was better and nicer and cleaner, because the metric was aesthetics.

Most of those arguments, however, are quickly brought to a stop if we actually ask the first question: how hard is it to add these new features? That said, I was frequently unable to convince coworkers at my past employments that the aesthetics of the design is not the goal. They simply clung to it, to a somewhat religious extent, identifying themselves with their "artwork".


I have concluded that smaller is better and straightforward is better. I think it’s easier to scale up a small system than to maintain a complex system that was built for scale from the ground up but usually got some things wrong because the requirements at the time weren’t clear.

But in the end there is never a clear answer. I am happy when people can explain what the positives and also the drawbacks of a design are. Pointing at “best practice” without explaining pros and cons is usually a big red flag.


Indeed, could not agree more. Also, preferring composition over inheritance is age-old advice, but that still did not prevent those architectural astronauts from creating inheritance structures 15+ levels deep. Luckily, newer languages make such monstrosities harder to construct and discourage them.


"Luckily, newer languages make constructing such monstrosities harder and discouraged."

They just encourage constructing monstrosities of a different kind. I don't think people who do stupid things in one paradigm will do any better in another paradigm. I see that a lot in the microservice vs monolith debate. If you can't manage a monolith you will also screw up microservices.


Just having a huge bikeshedding festival at work after I wrote this sentence into our guidelines...

The worst thing is how hard it is to talk about it because all the books - by authors making more money from talking about software than from writing and maintaining code - recommend it, and it's just so SOLID and Hexagonal and looks obviously intuitively correct.


Can you elaborate? You have piqued my interest :)

The advice to prefer "composition over inheritance" for code reuse is in the Gang of Four book (the design patterns book). I really don't understand how we ended up in a situation where 30-year-old advice is still valid and still not followed. I lay, perhaps too much, blame on Java, which seems to have inheritance baked into its infrastructure, but similar approaches have also been adopted in C++, where multiple inheritance makes things even worse.

I mean, SOLID, when used appropriately, is also valid. The problem is that the design patterns are used where a simpler solution would work just as well.


This is exactly the problem: the advice is valid, but the developers can't see that their implementation is not, and my examples of maintainable code seemingly go against the advice.

They think their huge class diagrams and statically unverifiable mess of many structurally identical classes (in TypeScript) are an example of DRY, composition, separation of concerns, inversion of control and encapsulation: all the great advice neatly packed into 50 files opaquely interconnected through dependency injection containers, where a simple 100-line function would have done the job and wouldn't cause a major headache for the poor guy who has to fix a bug in 3 years.

The root issue is that these guys were never the poor guy who has to fix a bug after 3 years. They moved on to the next job after a year or two of "implementing best practice approaches".


> "What would the system’s structure need to be so that <some feature> would be no harder to implement than necessary?"

This sounds good, but in my direct experience it is really, really hard.

For example, sometimes you have a feature that is really easy to add. Just add a new argument or keyword or command and implement it in the guts.

But every once in a while you get a beautiful architecture that has a "direction" to it. And a horrendous requirement comes along and breaks everything. For example: port it to macOS. Or add and call this third-party library. Or break it up into an SDK, a CLI and a web service.

Sigh. I guess that's why this kind of career keeps you on your toes.


Ambiguous only in the sense that it doesn't always mean exactly the same thing. The direction of "better" on a graph is always the same, though, even if the point isn't at the same x,y coordinate.


It's not ambiguous; it's a collective decision between engineers and business stakeholders. The ambiguity comes from engineers not having full information.


I find the biggest issues in industry and organizations are so-called "tech debt" and having no plan for future improvement of a solution as it matures or its user base scales. Planning for these is essential.


I don't think users care about software design.

At most they might care about non-functional requirements (e.g. security and performance).


Which is why Facebook had the motto "move fast and break things": you need to break bad abstractions to get to good abstractions and solve problems.


I decided I would prefer to quote Ms. Frizzle instead of this.


Unfortunately it happens often enough that you manage to sell the step where you make things worse, but then you never get management buy-in for the step that makes things better again.


TL;DR you can't make an omelette without breaking eggs?


You can always make it with blood instead.


LOL


> Monotonic-only improvement is the path to getting trapped. Take chances, make mistakes, and get messy.

Evolution disagrees.


Evolution is a satisficer, not an optimizer.

All major trophic-level breakthroughs are powered by evolving a reserve of efficiency in which multi-step searching can occur.

Multicellular life, collaboration between species, mutualism, social behavior, communication, society, civilization, language and cognition are all breakthroughs that opened new feature spaces of exploration, and each required non-locally-optimal transitions by the involved systems FIRST to enable them.

Trust is expensive and can only be bought in the presence of a surplus of utility vs. requirements.


Wow, very well put. Any suggestions for academic papers, books, or even online resources on these topics would be greatly appreciated.


This is related, and it is the paper that constantly lives rent-free in my head. I think it will retroactively be viewed as revolutionary: https://www.alexwg.org/publications/PhysRevLett_110-168702.p...

Basically, intelligent behavior is optimizing for "future asymptotic entropy" rather than maximizing any immediate value. How intelligent a system is then becomes a measure of how far into the future it can effectively model and optimize entropy.
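
From memory (so treat the exact form with caution), the paper's headline equation is a "causal entropic force":

    F(X_0, \tau) = T_c \nabla_X S_c(X, \tau)

where S_c(X, \tau) is the entropy of the distribution over causal paths of duration \tau available from state X, and T_c is a constant setting the strength of the force.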

(updated with pdf link)


Great paper! There are some similar ideas to this in game theory and reinforcement learning (RL):

[1]: Thermodynamic Game Theory: https://adamilab.msu.edu/wp-content/uploads/AdamiHintze2018....

[2]: piKL - KL-regularized RL: https://arxiv.org/abs/2112.07544

[3]: Soft-Actor Critic - Entropy-regularized RL: https://arxiv.org/abs/1801.01290

[4]: "Soft" (Boltzmann) Q-learning = Entropy-regularized policy gradients: https://arxiv.org/abs/1704.06440



I didn't say that evolution finds the optimal state, I just wanted to highlight how far it was able to go; much farther than you'd expect, it seems (like the evolution of the eye).

But your comment was refreshing. Could you briefly expand on the "multicellular life" part? Did you mean that it enabled more non-locally-optimal transitions, or that it required them to appear?


I think cooperation is never a locally optimal strategy. Somebody always gets to pick second at the prisoner's dilemma table, and the locally optimal behavior is to eat the trusting idiot.
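
A quick sanity check of that claim (the payoff numbers below are my own illustrative values, not from the thread):

    # one-shot prisoner's dilemma; payoffs chosen so that
    # temptation > reward > punishment > sucker
    R, S, T, P = 3, 0, 5, 1
    payoff = {("C", "C"): R, ("C", "D"): S,
              ("D", "C"): T, ("D", "D"): P}  # (my move, their move) -> my payoff

    for theirs in ("C", "D"):
        # whatever the other player does, defecting pays strictly more,
        # so the locally optimal move is always to defect
        assert payoff[("D", theirs)] > payoff[("C", theirs)]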

It takes a lot of luck to evolve cooperation multiple times at once; it's much more likely to happen in a situation where the selection pressure is lower, not higher.


Now you get into the definition of "locally". Gene-pool local or individual local? I think it's evident that cooperation has proven highly effective at the gene-pool level. Whether it will prove effective only over the short term and flame out over longer time spans remains to be seen. Will there be anyone to document it? Not sure, but it's been a helluva ride.


Pretty sure those non-cooperative strategies quickly burn themselves to extinction though. The selection pressure itself would be regulated towards an equilibrium.

The thing about evolution is that you are sampling many times in different directions. So "luck" isn't that hard to achieve.


> Pretty sure those non-cooperative strategies quickly burn themselves to extinction though.

Um, most life hanging out in the same trophic level or lower is heavily predated upon. Competition is the norm.

Luck is hard for cooperation because it is a coordination problem. You basically have to evolve cooperation entirely as an unexpressed trait and then trigger it in the population almost simultaneously. The mechanisms of cell cooperation are critical dividers on our evolutionary trees for a reason: they are rare and dramatic in consequence. Cell populations regressing in terms of coordination behavior (see cancer) is one of the most problematic failure modes, and it is only very weakly selected against.


> Um, most life hanging out in the same trophic level or lower is heavily predated upon. Competition is the norm.

I'm referring to the predator-prey population cycles. If you overexploit your prey, you are going to run out of food and see your population thin out rapidly from starvation. Hence hyper-competitive strategies get outbred by less competitive but sustainable strategies.
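
That's roughly the classic Lotka-Volterra dynamic; a minimal sketch (all parameter values here are arbitrary illustrative choices, not fitted to anything):

    def step(prey, pred, dt=0.001,
             growth=1.0, predation=0.1, death=1.5, conversion=0.075):
        d_prey = (growth * prey - predation * prey * pred) * dt
        d_pred = (conversion * prey * pred - death * pred) * dt
        return prey + d_prey, pred + d_pred

    prey, pred = 10.0, 5.0
    for _ in range(200_000):
        prey, pred = step(prey, pred)
        # the populations cycle: a predator boom overexploits the prey,
        # the predators then starve, and the prey recover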

High predation levels would require equally high cooperation levels amongst prey to ensure rapid reproduction to sustain the food supply. If we go down the food chain it's the same thing: plant life, cellular life, etc. have to be flourishing to sustain the upper levels.


You quoted this part,

> Take chances, make mistakes, and get messy.

But then seemed to indicate evolution disagrees.

I might be misunderstanding your point, but it sure seems like evolution tries a bunch of stuff, and whatever reproduces kinda wins.

That seems like: take chances, make mistakes, get messy. That seems like the core of evolution.

Could you clarify or refine what you’re saying? The two seem at odds.


So that was a bad quote; I only wanted to address the part that mentioned monotonic-only improvement, since to me evolution has achieved more than I'd have imagined, evolving organs like the eye incrementally.

I got inspired by this article: https://writings.stephenwolfram.com/2024/05/why-does-biologi...


Basically, the root disagreement was over "monotonic improvement". Evolution is awesome, but it couldn't work with only monotonic improvement.

I used to do an "optimization" on my genetic algorithms: I'd ensure the highest-scoring genome of the last population was a member of the new one. It made sure every single generation improved or stood still.

It was a good idea to keep a copy of the "best" genome around for final output, but by keeping it in the search space, I was damaging the ability of the algorithm to do its job by constantly dragging the search back to the most recent local optimum.
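
Roughly what that looked like, as a from-memory sketch rather than the original code (mutate and crossover here are hypothetical helpers):

    import random

    # `mutate` and `crossover` are hypothetical GA operators, shown only for shape
    def next_generation(population, fitness, keep_elite=True):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: len(ranked) // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(len(population) - 1)]
        if keep_elite:
            # guarantees the best score never drops, but keeps dragging
            # the search back toward the current local optimum
            return [ranked[0]] + children
        # better: archive ranked[0] off to the side for final output,
        # and let the population itself pass through worse states
        return children + [mutate(ranked[0])]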


Evolution regularly ends up in local optima that it struggles to get out of. Species go extinct all the time when there's no evolutionary path that solves their problems.


And on the flip side, a sufficient abundance of resources and/or a lack of predators means non-optimal species can procreate, and thus find other local optima.


In terms of evolution, the fitness of a species is defined by its ability to reproduce. In the circumstances you describe, selection pressure exists for the species that can reproduce the fastest. Predators or resource constraints are not a requirement for evolution.


> In the circumstances you describe, selection pressure exists for the species that can reproduce the fastest.

My point was there's no pressure without constraints. A faster-reproducing species will only apply pressure if it starts exhausting a resource or similar.


I didn't mean to say that evolution avoids local optima, but I wanted to say that it doesn't have to get "trapped", in the sense that it was able to produce organisms as complex as humans...


Because of its incremental nature (it can't think moves ahead), only a tiny percentage of all viable life forms can be produced by evolutionary processes: species forged by their ruthless struggle for survival. Maybe humanity will be the first species to escape evolutionary constraints, but maybe humanity is like the many other species that burn brightly but briefly. The universe seems to be cold and empty and devoid of life. Perhaps evolution is very good at producing cockroaches and not that good at creating intelligent life.


Right, but it could well be that there is some other greatly superior organization of life toward which there exists no evolutionary path.


What makes you think we're not trapped in some really mediocre local optimum? Tree dust makes me sick.


>> Monotonic-only improvement is the path to getting trapped. Take chances, make mistakes, and get messy.

> Evolution disagrees.

It’s not an either/or. Vast, modularized, localized improvement allows for pruning and selecting what does and doesn’t work.

https://hbr.org/2020/01/taming-complexity


<gif="blinks in Cambrian Explosion"/>


> Evolution disagrees.

Ah yes, monotonic-only improvement by way of making every small, messy mistake possible and still probably going extinct is definitely the way to go.



