The Perfect System Doesn't Exist (alexkondov.com)
78 points by kondov 12 days ago | 24 comments





A great skill I have been developing as I mature is the ability to look at a seemingly suboptimal implementation I receive and not be appalled by the perceived sub-optimality, but rather appreciative of the history embedded in the little details. For me, any system producing correct and useful results is perfect under the constraints of its implementation. Even if the developer was not great, someone was great enough to mitigate it. I have seen many such ugly, perfect systems.

That is a great skill. I’ve been calling this ‘respect for working software.’ There are tons of benefits. First, it improves your decision making with respect to evolving that piece of software, as you are now less likely to embark on an unnecessary rewrite or refactor. Second, it lowers your stress, as you can actually enjoy working on the code. Overall you’ll be a (much) more effective SDE.

I call it software kintsugi[0].

https://traditionalkyoto.com/culture/kintsugi/


Hey, this is super good. Might even attempt to put together a short blog post with this as the central idea! Thanks!

I actually felt like writing it first[0], feel free to take inspiration from it.

[0]https://www.fer.xyz/2021/11/software-kintsugi


Excellent post, and thank you for the quote! In retrospect, a bit self-referentially, I would lacquer some things in my phrasing, but I think part of kintsugi is that aspect of enabling the latent agency of corrections and additions to the body of initial intention that links all creators across time. In that respect, your post is a much better state of things than me fine-tuning my words.

Nice. I'll have to use that idea.

You can get pretty damn close if you establish some reasonable constraints.

For me, a perfect system is one that is stable over long periods of time. Achieving these outcomes first requires careful selection of tools, frameworks and platforms. Building anything on unstable foundations is never going to end well over time.

We have large chunks of code that haven't been modified since ~2014. How many shiny frameworks and languages have come and gone between then and now? You can be certain I have basically forgotten about this code. Not out of disdain or neglect, but because it just works 100% of the time now.

We have a lot of problems to deal with in front of us, so we find the courage to properly settle things and move on.


Yep, that's my idea as well. We have code from the early 00's that has been updated for security but nothing else. It makes building very large systems with tiny teams much easier if you don't go after the latest & greatest.

The problem for me is that if I settle, I start to settle every step of the way, and 0.95^n is pretty small when n is not small. If you settle here and there, you can already end up with something that’s 10% of the optimal thing. It’s a lot worse if you make a habit of it. I see more honour in sitting with my failure and struggling to be optimal than in settling. I know it’s holding me back and that it’s mentally and emotionally taxing, but I’m more terrified of descending far below mediocrity.
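The 0.95^n intuition above is just compounding. A quick sketch in Python (the 0.95 per-decision factor is the commenter's; everything else is illustrative):

```python
# Each "settled" decision keeps only a fraction of the achievable quality;
# across n independent decisions those fractions multiply.
def compounded_quality(per_decision: float, n: int) -> float:
    return per_decision ** n

for n in (10, 25, 45):
    print(f"{n} decisions -> {compounded_quality(0.95, n):.0%} of optimal")
```

At 95% per decision, roughly 45 settled decisions already leave about 10% of the optimum, which matches the comment's figure.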

I feel your pain. So many developers are quick to make concessions, citing "premature optimization" or something like that. Then again, modern hardware can simply brute-force its way through inefficiencies, and it does save the developer a bunch of time.

With the type of software many of us deal with - web, business, enterprise, on general-purpose CPUs - we hold quicksand in our hands and try to shape it continuously to meet ever-changing expectations.

Lessons learned over and over: Working code works, and, don’t let perfect be the enemy of good.

If you can change it with little effort, it’s as close to perfect as it has to be.

Ofc not all domains are the same, but here I am with quicksand in my hands.


One major reason:

All the systems we construct are, at best, models of reality. Because the information content of reality is always bigger, the model must fail at some point: information content is bounded by surface area (the Bekenstein bound), so every model is trying to represent, with less information, a reality that contains more information.
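For reference, the bound the comment invokes can be stated as (a sketch: S is the entropy, R the radius of a sphere enclosing the system, E its total energy, k Boltzmann's constant):

```latex
S \le \frac{2 \pi k R E}{\hbar c}
```

Finite R and E therefore cap the information any bounded region, including a mind or a computer, can hold.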

This gap means you can never have perfect predictive fidelity in any system. You do not even need to invoke human frailty; the human mind can never model and predict the universe for the same reason, so anything we imagine will always have limits and will eventually fail as both model and prediction.


All models are wrong, but some are useful.

It’s impossible to have a correct model, but you can have a model that is valid for its intended purpose. The hard part of engineering is to draw a box around the scope and have everyone agree that’s what is to be done.


Pretty good article and good points. I want to add a few more based on my experience:

* On the topic of systems changing: the "perfect" system is a moving target. Have you ever encountered an entire module that could be replaced by a library call? The plot twist is that the library didn't exist 5 years ago when the code was written. Found an ugly hack that's completely unnecessary? That was written to overcome a system limitation that has since been removed. As time passes, what a "perfect" system looks like changes as well.

* On the topic of never the same implementation: we aren't all experts in everything. Being assigned to a project in a tech you have no knowledge of whatsoever is a very humbling experience. You'll make tons of mistakes, and the code will be far from ideal, but it's only with time that you'll get to appreciate (and fix) those mistakes.

* On the point of Good > Perfect: there is the topic of diminishing returns and leverage. In the previous two points we either can't, or don't know how to, write better systems. Sometimes we can and do know how, but it's better not to. A while back I wrote a service that's central to our infrastructure. I spent a bit of time making sure the code was clean and we had good test coverage. It has been in production a while and has never caused an issue. Sometimes I look at it and see some obvious faults: interfaces could be tidier, some code could be cleaner. However, I force myself not to spend time fixing it. Why? It isn't a high-leverage activity. Rather than polishing a service that works well, I should spend that time fixing open bugs or improving the parts of the system that don't work as well. Even if it isn't as satisfying to the ego, the return on investment is much higher.


>I realized that even armed with all the theory and context, the perfect system still remains a mythical creature. In other words - it doesn’t exist.

I wouldn't go so far as to say this.

The perfect system does exist; we just need to define it formally. That will take a long time and a lot of research, but we can get there.

Any time you hear the words "designing systems", it refers to some aspect of reality we don't understand, so we go through this "design process" where we attempt to guess-and-check our way to a very suboptimal solution.

Take, for example, the shortest distance between two points. The answer is a straight line. We have a formal definition of this axiom; we do not need to design the shortest distance between two points. If we complicate the problem and ask what the best way to travel between two points in the United States is... well, then the answer gets much more complicated. Do you take a car? A bus? A plane? Which one is cheaper? Which is faster? Which is better for the environment? All these decisions make it too complicated to calculate a solution, so we turn to design. We use "design" to create systems where no closed-form solution yet exists. And in the past decade we've used machine learning as one possible way of finding solutions to these kinds of problems.
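The contrast drawn above can be made concrete. A closed form needs no design; the multi-criteria case forces us to design a scoring function. A sketch in Python (all figures, weights, and option names are made up for illustration):

```python
import math

# Closed form: the shortest distance between two points needs no design.
straight_line = math.dist((0, 0), (3, 4))  # 5.0, by the Pythagorean theorem

# Multi-criteria travel has no single closed form: we *design* a score
# and search over the options. Numbers below are invented.
options = {
    "car":   {"cost_usd": 120, "hours": 14, "co2_kg": 180},
    "bus":   {"cost_usd": 60,  "hours": 20, "co2_kg": 50},
    "plane": {"cost_usd": 250, "hours": 4,  "co2_kg": 400},
}

def score(o, w_cost=1.0, w_time=10.0, w_co2=0.5):
    # The weights *are* the design decision: change them and the
    # "optimal" answer changes with them.
    return w_cost * o["cost_usd"] + w_time * o["hours"] + w_co2 * o["co2_kg"]

best = min(options, key=lambda name: score(options[name]))
```

With these weights the bus wins; raise w_time to 50 and the plane does. The answer follows from the chosen weights, which is exactly the "design" the comment describes.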

While we have to use designs for building planes and such, I do not believe this will always be the case for programming. I truly believe in a world where it is possible to calculate our program designs. If you really squint... I sort of see a path leading to this world within functional programming and category/type theory.

https://www4.di.uminho.pt/~jno/ps/pdbc.pdf


I don't think we'll ever be able to calculate program designs, like having a computer design the theoretically optimal {sorting algorithm, hashmap, SQL engine} for a given processor, for a particular performance criterion (average-case and worst-case performance), out of the exponentially exploding set of possible type and function definitions. And "the perfect system" is different on a DS, a smartphone, a desktop PC, or a GPU or supercomputer. I think it's orders of magnitude more complex than designing an optimal chess algorithm. But I'll look into the PDF you linked, since I believe "calculating a program design" would be great to have (though I doubt it's possible), and people aiming for it will hopefully create useful tools along the way.

I do think we can and should define a self-consistent internal logic for the program designs we do write (e.g. under a given set of valid inputs, the program is memory-safe, is thread-safe, does not produce UB, has program-specific invariants upheld by each function's preconditions and postconditions, and produces correct results with, e.g., O(n log n) runtime, making O(n) allocations totaling O(n) bytes ever allocated), then verify that the code we write upholds this logic. Every program has an internal logic, correct programs have self-consistent logic, and any holes in the logic are exposed as bugs in the program. And easily understood code makes this logic self-evident to readers.
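A tiny, hypothetical illustration of that "internal logic" made executable: preconditions and postconditions written as runtime assertions (the function and its names are ours, not from the thread):

```python
def merge_sorted(xs: list[int], ys: list[int]) -> list[int]:
    # Preconditions: both inputs are already sorted.
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
    assert all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):          # the merge itself is O(n)
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    out += xs[i:] + ys[j:]
    # Postconditions: output is sorted and is exactly the multiset of
    # the inputs. (These checks are for exposition; a verifier would
    # prove them rather than run them.)
    assert all(out[k] <= out[k + 1] for k in range(len(out) - 1))
    assert sorted(xs + ys) == out
    return out
```

Any hole in this logic, say a caller passing unsorted input, surfaces immediately as a failed assertion rather than a silent bug downstream.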


>I don't think we'll ever be able to calculate program designs, like having a computer design the theoretically optimal {sorting algorithm, hashmap, SQL engine} for a given processor, for a particular performance criterion (average-case and worst-case performance)

Not talking about efficiency here. The theory of optimal time complexity is already done. We know how to calculate quantitative measures for average-case and worst-case performance, and for many problems we know how to optimize for both.

When I refer to design, I refer to the aspect of computer science we have no theory for. How do you organize your logic? What organization of code is optimal and future-proof? This area, I believe, can be formally defined.


Then it will take a consensus to define it. It will probably turn into C or C++ committees with ever-changing goalposts. So in the end it will be back to square one, but instead of the original issue it will be a layer up: finding the perfect definitions. I think my supervisor summed it up best... there is nothing constant except change itself.

Nah, I'm thinking in terms of mathematics. That's what I mean by formal. Do committees exist that define geometry? No. A formal basis can be established on a very simple foundation. This is not a committee; I am talking about the formation of a mathematical formalism for program organization, similar to the mathematical theory of time complexity (note the lack of committees on time complexity).

The theorems and axioms of geometry are consistent and evident. This is what I'm proposing: we must define the axiomatic notion of optimal program organization. There may be several metrics here, but like the axioms of geometry we must pick something foundational. For example, "the shortest distance between two points is a straight line" is a foundational axiom chosen for Euclidean geometry.

Just as geometry, group theory, or probability follows from a set of rules and axioms, I foresee the possibility of such a thing happening for program organization.

> I think my supervisor summed it up best... there is nothing constant except change itself.

Your supervisor's mind is clouded by the endless circle of redesign happening throughout the industry. He doesn't think above and beyond that. "What is design?" and "What is optimal?" are the questions he should ask. Sure, everyone has their own opinion on what is "optimal", but at the same time everyone on the face of the earth agrees on some foundational concepts that optimality encompasses (including the trade-offs). Therein lie the axioms of program organization. Somewhere within this universal agreement, which nobody has really sought to fully crystallize yet, exists the formal theory of program organization.

We were able to formalize our notion of "luck" with probability. Probability is humanity's universal agreement on the true nature of "chance" or "luck." Before probability, luck and chance were fuzzy, qualitative, opinionated concepts that were ripe for formalization.

If we can formalize "luck" then we can do the same with how we organize logic.


I used to think my love of making things orderly and perfect was a boon as a software engineer, where things should be clean and logical - that this was me sticking to high professional standards. But I've come to realize that these OCD tendencies (I realized I had OCD a few years back) are actually a huge setback. I have to actively work to say enough is enough: my text editor, programs, desktop, filesystem layout, etc. are never going to be perfect, let alone bug-free, and neither will anyone else's code.

Everything is about finding good trade-offs and arriving at the best compromises. Time to market, reliability, scalability, complexity, maintainability, costs, value to customers, and more are just variables you need to put into complex equations to be solved.

Experience helps. Inventiveness helps. Trying to do things better than you did before helps. Asking for input helps. Research helps.


This can simply be summed up as: we're human. Humans aren't perfect, and neither is the code we write. At best we can try to enforce checks and balances with CI and git hooks, but since things always change, you'll reach a point where you need to build something in limited time and you'll have to compromise.

Chaos and entropy always creep into any mechanism, but if we don't even try to get it right at the beginning, the system might not even make it to prod.


