
The problem with "systems design" is the lack of formality and the excessive use of conjecture. Nobody has a problem with Boxes and Arrows if those boxes and arrows have formal definitions (See category theory). The problem arises when those boxes and arrows represent vague unproven concepts with fuzzy meanings.

For something to be more legitimate it needs to be formalized into a fixed set of axioms and formal logical rules. Relativity is an abstract concept but the rules it is based on are formalized. When nothing is formalized, nothing can be formally proven, and the whole field descends into "design arguments" where examples and philosophical conjectures are thrown all over the place with no one able to really prove why one design pattern is better than another.

The problem with this article is that he never gives a formal definition of what a system is, nor does he derive theorems. He just goes through example after example. His "revolutionary" thought here is that systems design may point to some underlying theory of design that applies to the world outside of computers. Big deal.

We've seen these concepts many times but in different forms: "design patterns," "architecture," "system architecture." The industry just travels in circles around all of these concepts, repeating the same old bs time after time, never converging on definitive answers. The lack of formalization prevents us from truly knowing what an optimized design is, so we just guess.

Let me put it to you this way. Anytime you see the word "Design" it refers to a field where we have little knowledge of how things truly work.

Nobody "designs" the shortest distance from point A to B. That value is calculated through formal theory. However we can't fully understand the best way for a "human" to get from point A, in California to point B in Colorado. We're simply not advanced enough to come up with a formal theory that can optimize the individuals preferences, time, space, energy, speed and comfort. Thus we "design" redundant solutions and hope that with each iteration in "design" we converge closer and closer to an optimal solution. But there's no way we can ever formally know if one solution is "better" than another.

Design is another word for random intuitive guess. It is only deployed for problems where an optimized solution is unknown. "Design" represents an intense lack of knowledge. Anytime you see the word "design" or "architecture" you need to realize that you're descending into a field or talking to an "expert" where nobody knows anything really.

As a result, the entire field of design travels in circles. Most new programming languages or frameworks are a useless attempt at producing an optimum solution and end up being just another iteration of the endless circle. Is golang the optimum programming language? Can we ever truly know if it's better? Are we truly pushing the field forward?

I would say that for languages like typescript, where the error surface area is provably smaller than JavaScript, or Rust, which has a smaller error surface area than C++, we have moved closer to an optimum solution. But overall, for things where a formal theory does not apply (type theory applies in the previous two examples), we aren't really converging on anything.
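The error-surface point can be made concrete with a small illustrative snippet: the same mistake is a silent runtime coercion in untyped JavaScript (or behind `any`), but a compile-time error once annotated.

```typescript
// Behind `any`, a type confusion slips through and produces a wrong value at
// runtime -- string concatenation where addition was intended:
const total: any = "2" + 2;
console.log(total); // "22"

// With a real annotation, tsc rejects the same expression before it ever runs:
//   const n: number = "2" + 2;
//   error TS2322: Type 'string' is not assignable to type 'number'.
```

That is the "provably smaller error surface" in miniature: a whole class of mistakes is ruled out before execution.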

The field is rife with "fake" formalizations and fake legitimacy. If someone has the job title "System Architect" or "System Designer" you know it's complete BS. The guy is just making stuff up from intuition and experience; there is no formal science here. He is much closer to an "artist" than he is to a "mathematician," "physicist" or "scientist" and should not command the same respect as the latter titles. The lack of formal legitimacy leaves a lot of room for BS. A good number of people achieve these roles simply by playing politics, because there's really no way of knowing if the "architectures" they propose are inherently correct or "better."

The endless circle is very apparent on HN, where you endlessly see new "design" blog posts on the latest metaphor that can be used for "design." In this case the "design" is "systems design" and the metaphor is the "real world." Why do people endlessly write articles on new design metaphors? Because "design" indicates such a lack of knowledge that it's really all they can do... make comparisons and metaphors rather than calculate actual optimum solutions.




> The guy is just making stuff up from intuition and experience, there is no formal science here. He is much closer to an "artist" than he is to a "mathematician," "physicist" or "scientist" and should not command the same respect as the latter titles.

Perhaps some artists should command greater respect among scientists than they do, and perhaps science and engineering are closer to art than they want to admit.

I too used to believe that knowledge was only real if it was formalized, and that intuition is fake and inferior. Now, while formalized knowledge is easier to record and convey, I realize that it is intuition that does the vast majority of the work in most settings. The ability to convey and gradually formalize intuition is indeed a mark of intelligence, but it is not necessary, sufficient or even desirable to wait until all intuition is converted to formalism before acting.

I think you are missing the fact that everything we do is implemented by humans, and that we are working with enough complexity and emergent behavior that we have to rely on intuitive pattern matching. All the formalism in the world means nothing if it doesn't change the way humans implement something, or if the problem has completely changed by the time a formal solution has been created. In my own professional work, as acknowledged by coursework I have had in the past, the bulk of real value has not come from, say, learning the math governing circuits, or learning new programming languages, but from building and applying intuition about systems of interconnected parts.

So if using semi-intuitive, semi-amorphous pattern matching to describe things helps humans deal with complexity sooner and faster than waiting for absolute mathematical formalism, then that is incredibly valuable to society and to the people who develop and apply improved intuitions.


>I too used to believe that knowledge was only real if it was formalized, and that intuition is fake and inferior. Now, while formalized knowledge is easier to record and convey, I realize that it is intuition that does the vast majority of the work in most settings. The ability to convey and gradually formalize intuition is indeed a mark of intelligence, but it is not necessary, sufficient or even desirable to wait until all intuition is converted to formalism before acting.

I never said all knowledge is invalid if it's not formalized. Far from it. I gave an example of a human getting from point A to point B. Clearly we have no choice but to rely on informal solutions when devising a "design" for aspects of our lives that are not open to formal treatment.

Overall, my point is twofold:

1. Often people don't know the difference between intuitive knowledge and formalized knowledge and attribute way too much legitimacy to "systems design." People think of these guys as scientists when they're really just making stuff up.

2. Informal intuition can as a result devolve into a trap where we endlessly come up with new frameworks, new metaphors and new "designs" without improving anything, because there's no formal way of verifying what is optimal. I am arguing that the author of this article, and articles like it, is just the latest iteration of an endless circle. He introduces nothing new and instead provides a new metaphor for us to ponder. Literally read it. Did anything change? Did you actually learn anything new? Was anything actually improved? No. And this blog post is one out of multitudes of "system design" and "system architecture" blog posts that have led to no real insight on anything. It's not just the blog posts either. The post is representative of "systems design" in the entire industry... endless circles going nowhere.

The best example of the above two points is monolith vs. microservices. Which is actually better? What are the actual tradeoffs? Nobody really knows, hence the reason the concepts have oscillated in popularity, and the resulting non-answer people come up with is: "Depends on your use-case." And still you get "experts" who pretend to understand this stuff as if it's legitimate "theory."

Let me give you another example. I can make up one of these "theories" and metaphors on the fly while offering zero substance.

You know how current architectures at most companies involve one giant monolith that does most of the work, surrounded by a bunch of microservices? It's no longer a choice between either monolith or microservices... everything is a hybrid now, with a monolith in the center and satellite microservices orbiting it. I call it the "orbital architecture theorem," which is basically a principle that says architectures have a tendency to develop centralized and distributed layouts simultaneously in nature. It's similar to the way planets and corporate organizations work... blah blah blah. Come on man... you see where I'm going with this BS? The entire industry is acting this way. It's endless circles of metaphors, BS theorems and laws and architectures that prove or show nothing.

The industry is ripe for formality in this area as we've been going in circles for decades. We are dealing with computers here: virtual idealized worlds ripe for systematic formalism. We don't even need scientific experimentation... just raw mathematical logic as there's no need to put a computer in a wind tunnel to measure unexpected aerodynamic effects.


> the resulting non-answer people come up with is: "Depends on your use-case."

> We are dealing with computers here: virtual idealized worlds ripe for systematic formalism.

I disagree with the premise. Generally, computers are the tool, not the product. The primary focus ought to be on the problem we're solving, not on the particular tool we use to construct a solution.

You're right: design patterns are couched completely in terms of the software domain. Just knowing a handful of design patterns doesn't help you understand when or how to apply them. When you confuse the map for the territory, design patterns look like the tool, rather than a particular example of how the tool is used. No doubt, that's problematic.

But I think you're under a similar illusion: that the computer _is_ the problem to be solved.

The practice of software engineering sits at the nexus of (at least) two distinct domains: computer science and the problem domain, whatever it may be. Software architecture is fundamentally about understanding the shape and structure of the problem domain, understanding the capabilities of your engineering tools, and creating an architecture where both domains support each other. Your overall system architecture should flex where the problem domain flexes, and may be rigid where the problem domain is rigid. (It really does "depend on your use-case".)

This kind of architecture isn't really about the computer. It's about the human developers and maintainers who have to inhabit this space for years, possibly decades. You have to think about the human factors, and understand (as best as possible) the problem domain you'll be working with. (And you have to be willing to tear it down once, finally, the problem has evolved sufficiently away from expectations.)

Architecture is just a discipline of making your life suck less later. It's not an easy discipline, and (clearly) not a well-understood one. But it's also not formal in the same way computer science or mathematics can be.


> Software architecture is fundamentally about understanding the shape and structure of the problem domain, understanding the capabilities of your engineering tools, and creating an architecture where both domains support each other. Your overall system architecture should flex where the problem domain flexes, and may be rigid where the problem domain is rigid. (It really does "depend on your use-case".)

Can I just say that I really like this framing; I think it succinctly captures why you see such a broad range of languages/frameworks and why there hasn't been one software solution that fits most problems.


Except the framing is bad, in my experience.

For any non-trivial software, the architecture quickly becomes 10% about problem domain, and 90% about internal code bureaucracy. The latter is something that's to a large extent problem-independent, and yet we're still spinning in circles about it for some reason.

(By internal code bureaucracy I mean the art of structuring and connecting all the abstractions in your code, the coding of pathways the data travels and transformations it undergoes, etc. Ever had a situation where a seemingly simple feature required you to touch half of your system, as you routed and transformed a piece of information from a place that produced it to the place that consumed it? That's code bureaucracy.)


It captured nothing in my view.

Even a formal solution won't necessarily have a single solution for all problems.

Likely there are several optimal solutions for each problem that exists. What formality will tell us is whether or not a given solution is definitively optimal.

Right now our definition of an "optimal design" is whoever wins the "design argument" flame war.


I think you'll find this interesting. It's a bit random but in another part of this thread your name appeared out of pure coincidence from another poster:

https://news.ycombinator.com/item?id=25557413

In the github thread (started in 2013) mentioned by pcen, you were part of the big debate on bringing more formalization to javascript promises. You were on the winning side (at the time) of NOT doing this. Your comment here: https://github.com/promises-aplus/promises-spec/issues/94#is...

Likely you muted the thread since then but the conversation continued.

Now in 2020, times have changed and typed javascript is now the dominant paradigm. However, the decisions made by you and your brethren back in 2013 continue to echo through eternity. The final comments in that thread (dated 2019) sum it up perfectly:

https://github.com/promises-aplus/promises-spec/issues/94#is...

and also:

https://github.com/promises-aplus/promises-spec/issues/94#is...

This github thread was brought up to me as an example of how my current thread is representative of history repeating the same mistakes. A decision to ignore formal theory in the past now results in a fundamental mistake that all JavaScript developers must live with.

The irony is that you, Jonathan, are part of both histories. You made a mistake then; the question is, are you repeating a similar mistake now?

Regardless of whether you're right or wrong now, and disregarding the previous (minor) verbal scuffle we had... I thought you'd be interested in this tidbit of serendipity.


As an addendum, you may like to see what my thoughts are on formalizing the modularization of software systems. As noted elsewhere, I frame it in terms of concurrent systems.

https://news.ycombinator.com/item?id=25567740


Ah, yeah, I reminisce on that thread every now and then. No, I don't have it muted. I was one of the more moderate voices, as I recall. (See my sole subsequent post, where I expressed interest in seeing the approach come to fruition, to help us understand what place it should have within the promises ecosystem. The thread was raging quite loudly by then, and as a much-junior developer, I stayed quiet after that.)

At the time, I had only just started learning Haskell, and while I really liked what monads brought to the table, JavaScript wasn't in a place at the time where it was especially common. With hindsight, monadic promises was perhaps the earliest step toward taking FP seriously in JavaScript.

>> You recall, perhaps, how electromagnetism and the weak force are actually part of the same mechanism, but only at very high temperatures? Well, we simply can't turn up the furnace that high in Javascript. Certain formalisms work beautifully in Haskell but fall flat in Javascript without support from the language. Haskell's static typing lets you offload a ton of work onto the compiler, but Javascript doesn't help much at all!

Leaving aside how embarrassing my demeanor was, I still believe in the core idea here. You want to use a language that doesn't fight what you're trying to do with it.

>> The fmap/bind divide is one of those places; it's just much easier to use the library if you combine the two together.

I strongly disagree with myself here now. This was, as I recall, the crux of the entire debate, and I've pretty strongly fallen on the side of not auto-flattening since maybe only a year afterward. Yeah, it's "easier", but it's less predictable.

(Even at the time, I was a proponent of making sure `.andThen()` wouldn't run the next step synchronously if the promise was already resolved, because that would make for an edge-case that you'd always have to keep in mind. "Is it resolved yet?". So I clearly had some sense of consistency, but I hadn't had the same realization about map vs. flatMap on promises.)
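For readers without the Haskell background: the auto-flattening being discussed is directly observable in today's Promise API. `.then()` behaves like `flatMap` (bind), and a plain `map` that preserves nesting simply isn't available:

```typescript
// Promises auto-flatten: resolving with a promise adopts its value, so a
// Promise<Promise<number>> cannot be constructed. `.then()` is therefore
// bind/flatMap; there is no separate fmap that would hand you the inner promise.
const nested = Promise.resolve(Promise.resolve(42));

nested.then((v) => {
  console.log(v); // 42 -- the inner promise was flattened away, not handed to us
});
```

This is the behavior that "won" in 2013, and it is exactly why the fmap/bind divide can no longer be expressed in the type of `.then()`.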

But back to the present:

> A decision to ignore formal theory in the past now results in a fundamental mistake that all JavaScript developers must live with.

This is why I more or less recused myself from yesterday's thread. I don't appreciate having to tear down a strawman of myself -- I am, and have been for many years now, a proponent of strongly typed, well-structured, indeed formal approaches to software systems. I also see value in an informal, phenomenological stance, because there's no getting around the fleshy humans that have to work with and maintain these systems. I try not to take either to exclusionary extremes.

> The irony is that you, Jonathan, [...]

Everything else aside, it's a bit of a taboo to address someone by a name they are not presently identifying themselves by. It's unearned familiarity.

> [...] the previous (minor) verbal scuffle [...]

For what it's worth, the remark involving "illusion" was intended to be a rhetorical device connecting the example of design patterns to the present topic. It was not meant to be a personal dig. I'll be more careful in the future.


>Everything else aside, it's a bit of a taboo to address someone by a name they are not presently identifying themselves by. It's unearned familiarity.

I mentioned your name to drop a hint at how I confirmed you and the person on the thread were the same person. Should have just been explicit about that. The hint in my mind is more obvious when communication is verbal and face to face but much of it is lost with text. My mistake. Apologies.


I disagree with you and you've made a mistake. You're the one under the illusion. I'll explain.

I know what you're getting at here. You just can't put it in words. And because you can't put it in words, you're unable to see the big picture. To put it plainly you're talking about this:

For a specific context, how do we organize and modularize code to maximize reusability for an uncertain future with uncertain requirements?

You think that this problem can't be formalized. And this is basically what you're trying to elucidate into words with your post. It's that simple.

However, this is the exact problem I am saying is ripe for formalization.

This is what I'm talking about and you've missed my point.

Who says we can't formally define what a module is? And who says we can't formally define what it means for a module to be more reusable? Who says we can't formally define program requirements? From these axioms we can define a calculus that, in the best case, allows us to calculate the most optimal program, and in the worst case at least lets us know whether one design is "better" than another.

Instead, what the industry does is write blog posts about a metaphor, then give a bunch of examples of why that metaphor is cooler than some other design pattern, then repeat the same thing every couple of years.

Let me frame it for you so that you can wrap your head around what I'm talking about. Given tic-tac-toe, we can formally play the game in a way we can never lose. This is definitely not a design problem but one that can be calculated. It's very easy to see why, because of the limited primitives you're dealing with in tic-tac-toe.
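To make the tic-tac-toe claim literal, here is a quick exhaustive-minimax sketch (illustrative, not production code): it computes, rather than guesses, that perfect play from the empty board is a draw, so the first player can never be forced to lose.

```typescript
// Exhaustive minimax for tic-tac-toe: the value of a position for "X" is
// calculated over the full game tree, so "never lose" is a theorem, not a design.
type Cell = "X" | "O" | " ";

const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

function winner(b: Cell[]): Cell {
  for (const [a, m, c] of LINES) {
    if (b[a] !== " " && b[a] === b[m] && b[m] === b[c]) return b[a];
  }
  return " ";
}

// +1: X wins, 0: draw, -1: O wins -- assuming both sides play perfectly.
function minimax(b: Cell[], turn: Cell): number {
  const w = winner(b);
  if (w === "X") return 1;
  if (w === "O") return -1;
  if (!b.includes(" ")) return 0;
  const scores = b
    .map((c, i) => (c === " " ? i : -1))
    .filter((i) => i >= 0)
    .map((i) => {
      const next = b.slice();
      next[i] = turn;
      return minimax(next, turn === "X" ? "O" : "X");
    });
  return turn === "X" ? Math.max(...scores) : Math.min(...scores);
}

const empty: Cell[] = Array(9).fill(" ");
console.log(minimax(empty, "X")); // 0 -- perfect play by both sides is a draw
```

The whole game fits in a calculation precisely because the primitives are limited; the argument above is that a program's "design space" is the same kind of object, only vastly larger.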

The "problem domain" defined within a computer is NO different. In computers You have a limited set of primitives in a rule based playing field: assembly language. The objective is not 3 in a row but whatever formal requirements your program has. Within this problem space there is either one or several configurations of assembly instructions that will fulfill that problem domain according to a formal definition of "optimal". That is the spirit of the formalization I'm talking about. A more complex tic-tac-toe problem.

The notion of what it means for an algorithm to be faster has been formalized. So if the problem domain were "how do you sort a list of shoe sizes in the fastest way possible?" then we ALREADY have a formal and definitive way to determine the best possible configurations of assembly instructions to achieve this goal. The problem is partially solved for speed. Picking a faster algorithm is no longer a design choice.
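The formalized-speed point can be made concrete: once "faster" is defined as a count of comparisons, choosing between algorithms becomes a calculation rather than a design argument. A toy instrumentation sketch (names are illustrative):

```typescript
// Count comparisons performed by a sort on a given input: the formal quantity
// behind the Big O claim, measured directly instead of argued about.
function countComparisons(
  sort: (xs: number[], tick: () => void) => void,
  xs: number[]
): number {
  let n = 0;
  sort(xs.slice(), () => n++); // copy the input; only count the comparisons
  return n;
}

// Insertion sort, calling tick() once per element comparison.
function insertionSort(xs: number[], tick: () => void): void {
  for (let i = 1; i < xs.length; i++) {
    let j = i;
    while (j > 0 && (tick(), xs[j - 1] > xs[j])) {
      [xs[j - 1], xs[j]] = [xs[j], xs[j - 1]];
      j--;
    }
  }
}

// Worst case (reverse-sorted input): exactly n(n-1)/2 comparisons -- the
// quadratic growth is a computed fact, not an opinion.
for (const n of [100, 200, 400]) {
  const worst = Array.from({ length: n }, (_, i) => n - i);
  console.log(n, countComparisons(insertionSort, worst));
}
```

Nobody "designs" the answer to "which sort is faster on this input class"; they compute it. That is the standard the rest of "design" has not yet reached.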

The next step is to formalize the notion of modularity and program organization and bring it out of the fuzzy realm of design and architecture and into a more formal realm where things are exactly defined. We came up with a number for speed (O(N)); who says we can't come up with a number for modularity?
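As a sketch of what "a number for modularity" could even look like, here is a toy, hypothetical metric (not an established one): the fraction of dependency edges that cross module boundaries. Given the same dependency graph, different groupings into modules get different scores, so "which decomposition is better" becomes comparable.

```typescript
// Toy modularity score: the ratio of cross-module edges to total edges in a
// dependency graph. Hypothetical illustration only -- lower means the chosen
// module boundaries cut fewer dependencies.
type Edge = [string, string]; // [from, to] dependency between components

function couplingRatio(edges: Edge[], moduleOf: Record<string, string>): number {
  const cross = edges.filter(([a, b]) => moduleOf[a] !== moduleOf[b]).length;
  return edges.length === 0 ? 0 : cross / edges.length;
}

// Four components in a dependency cycle: a -> b -> c -> d -> a.
const edges: Edge[] = [["a", "b"], ["b", "c"], ["c", "d"], ["d", "a"]];
const grouping1 = { a: "m1", b: "m1", c: "m2", d: "m2" }; // cuts 2 of 4 edges
const grouping2 = { a: "m1", b: "m2", c: "m1", d: "m2" }; // cuts 4 of 4 edges

console.log(couplingRatio(edges, grouping1)); // 0.5
console.log(couplingRatio(edges, grouping2)); // 1
```

Whether this particular number is the right formalization is exactly the open question; the point is only that a number is possible at all, the same way O(N) is a number for speed.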

I don't blame you for making this mistake. "Luck" itself used to be a design problem. Intuitively, it's very hard to think of "luck" as a measurement problem. It was utterly incomprehensible for a typical person to even visualize luck as something that could be calculated. It was only after the development of probability theory that people were able to see how it could be done; it is much harder to predict the formal possibilities of a topic before the formalization has actually been done.

It's the same thing for module organization. "Design," so to speak.

>Architecture is just a discipline of making your life suck less later. It's not an easy discipline, and (clearly) not a

You can't even define what it means to make your life "suck less." Are you talking about algorithmic speed? Because that's been formalized. Are you talking about fewer bugs? Because smaller error surface areas have been partially formalized with type theory and Rust's borrow checker...

Are you talking about organization of modules? Because if you're talking about a formal way to organize modules, well that hasn't been done. But don't let that limit your mind into thinking it can't be done.

Especially don't let that limit your mind into falling under the illusion that all of these endless circles and design philosophies that go in and out of vogue throughout the industry are actually doing something to improve software as a whole. If you ever wanted a good example of history repeating itself, software design is the perfect one.

In simple terms: the discipline of "architecture" is not well understood because it has NOT been formalized. At this point it's borderline fraudulent to even call it a discipline. Call it a career title, sure, but don't try to think of this stuff as anything on the same level as the sciences.


You keep talking about “optimal,” but algorithms and data are only good for optimizing, not for defining what you mean by “optimal” in the first place. Logic and engineering can achieve goals, but they can’t help you choose what goals to achieve.

The real world is big and incomprehensible, and we all have to choose optimization goals based on what limited understanding we have of the big picture - also known as the “system.” There is no math formula you’ll ever be able to find that can do that for you.

Optimizing a known goal, that’s the straightforward kind of engineering, the kind I said in the article maxes out at Senior Engineer. Figuring out what goal to optimize for is messy and uncomfortable, but it’s the only way to eventually solve the bigger problems of the world.

When I was in grade school, math was my favourite subject, because the answers were always right or wrong and you never had to worry if the teacher would disagree with you. It was comforting. It still is. But to grow up and have a bigger impact, we have to move on and learn about uncomfortable things too.

I can tell this bothers you a lot. It used to bother me a lot too. The reason I write articles like this is to explain the things I used to not understand, using words that I hope would have helped past-me understand sooner. Maybe it will help you too. If not, no harm done.


> I can tell this bothers you a lot. It used to bother me a lot too. The reason I write articles like this is to explain the things I used to not understand, using words that I hope would have helped past-me understand sooner. Maybe it will help you too. If not, no harm done.

Thanks for writing this. I can relate because these things used to bother me too. I like your math-as-favourite-subject anecdote because it's one I can relate to, and coaching colleagues and reports past the fear of looking wrong, or even admitting that they're wrong, has been a much more significant part of my career than I ever would have expected. The funny thing is that if you had told me that at the point in my career where it bothered me, I would not have listened. I had to stumble through it to realize everyone (including people like you) who warned me about this was right. I almost wonder if it's a rite of passage for engineers.


Do you still have trouble admitting you're wrong? Your last post to me was a complete troll, and the moderator killed it because of how vile and rude it was. Do you resort to insults when you can't admit you're wrong? You certainly did to me.

> I almost wonder if it's a rite of passage for engineers.

It's flawed to think of the world as a reflection of yourself. It's definitely not a rite of passage for engineers. This is more of a rite of passage for you and your fear of being wrong.

Math to me was the same as any other subject. I never had an issue with being wrong or right. If anything, English was my best subject. My math skills developed later in university. I have a greater talent for formal math than I do for the applied math that they teach in high school. But this has nothing to do with any fear of being wrong. I like formal math because I find the philosophical implications of the subject interesting. I derive no other comfort from it and definitely can't relate to your or the parent poster's ability to derive comfort from the exactness of the answers. It's just puzzles and solutions; I can't derive "comfort" from that any more than I can derive comfort from a rock.

I think both you and the author of the original article are making misplaced judgements on character. Don't assume that others think like you, have the same qualities as you, or have the same weaknesses as you. Every time you make this assumption you're actually revealing more about yourself to others.

Your last post to me, the one that got killed by the moderator, was strange. I was sort of curious where all that made-up stuff came from. Then I realized none of it was made up. It's just a reflection of your own horrible life. For that I'm truly sorry, and I hope you can find a way out of your miserable circumstances.


>not for defining what you mean by “optimal”

This is my point. The path to formalization is finding the exact formal definition of the notion we intuitively understand as "good software design patterns." For algorithm speed, we use Big O. For other aspects of design, I'm saying that rather than creating more software design metaphors, a more productive use of our time is to formalize and converge on an optimum.

>The real world is big and incomprehensible, and we all have to choose optimization goals based on what limited understanding we have of the big picture - also known as the “system.” There is no math formula you’ll ever be able to find that can do that for you.

Yes, except the computer system is an idealization of the real world. It places the real world into a logical world of formalisms where we can eschew science and use pure logic to draw conclusions.

>Figuring out what goal to optimize for is messy and uncomfortable

Sure. But clearly that task is separate from design patterns. 1. The business "designs" an objective and a goal. 2. The software engineer meets that goal. I am talking about 2, not 1.

>When I was in grade school, math was my favourite subject, because the answers were always right or wrong and you never had to worry if the teacher would disagree with you. It was comforting. It still is. But to grow up and have a bigger impact, we have to move on and learn about uncomfortable things too.

I fail to see how discomfort or growing up have anything to do with the topic at hand. The goal is to converge on the most correct and definitive answer possible.

>I can tell this bothers you a lot. It used to bother me a lot too. The reason I write articles like this is to explain the things I used to not understand, using words that I hope would have helped past-me understand sooner. Maybe it will help you too. If not, no harm done.

It doesn't bother me at all. It sounds like some sort of anxiety disorder if the fuzziness of certain answers bothered you. Human brains are neural nets, more similar to the probabilistic (aka fuzzy) outputs produced by our machine learning models, hence most people should be more comfortable with fuzzy answers than with pure logic. Either way, I am simply arguing a point. Your empathy is appreciated but unaccepted; that is not the goal. Again, the goal is to debate a point and find a correct answer.


> Your empathy is appreciated but unaccepted, this is not the goal. Again, the goal is to debate a point and find a correct answer.

Ironically but very relevantly to this conversation, it seems we disagree on the goal. :)


Well I'm not here to talk about my character. I'm only interested in my point. If you want to talk about me, well I can only tell you I'm not interested and neither is anyone else reading this thread. It's sort of against the spirit here on HN.


> The "problem domain" defined within a computer is NO different.

Yes, you are correct: a specified problem domain within an implementation can be formalized, proved correct, etc.

But that literally has nothing to do with the actual problem, which, as Twisol said, is about mapping between the problem domain and the software domain so that both are supported.

Both hardware and software have moved in well-known cycles. Processing has moved from CPUs to external processors (IO, then GPUs etc.) and back again as hardware capabilities change. When comms are slow, it's better to have two smart ends so that you minimize the information that needs to flow. When comms are fast, you can make one end "dumber".

Your entire premise is at a different level of abstraction than "software architecture" or "systems design". You haven't proposed a mechanism by which messy human problems can be formalized.

Software architecture and design is about dealing with non-deterministic, non-linear, fractal environments and the computer and systems science hasn't provided the theoretical frameworks that would allow the environments to be formalized so that they can be "engineered".

It's definitely not "science", nor is it "engineering", but it is definitely a discipline. You are judging one field by the axioms of another and then declaring it invalid because it doesn't comply.


>It's definitely not "science", nor is it "engineering", but it is definitely a discipline. You are judging one field by the axioms of another and then declaring it invalid because it doesn't comply.

It's a discipline the same way modern art is a discipline.

>But that literally has nothing to do with the actual problem, which as Twisol said is about mapping the problem domain and the software domain so that both are supported.

I explained to Twisol that I am talking about the "problem domain." The two domains actually go hand in hand.

Think about it this way: number theory is a formalization of numbers. A math problem lives in the problem domain and uses that formalization of numbers as its method of solution.

>Both hardware and software have moved in well known cycles. Processing has moved from CPUs to external processors (IO, then GPUs etc) and back again as hardware capabilities change. When comms are slow, its better to have two smart ends so that you minimize the information that needs to flow. When comms are fast, you can make one end "dumber".

So what's your point? I'm not getting what you mean by slow comms, smart ends, and dumb ends.

>Your entire premise is at a different level of abstraction than "software architecture" or "systems design". You haven't proposed a mechanism by which messy human problems can be formalized.

No, it's not. It encompasses all levels. Have you ever used formal math to solve a "math problem" -- say, accounting? Balancing the books of your finances is a "messy human problem" that is solved using a formalization of something else.

>Software architecture and design is about dealing with non-deterministic, non-linear, fractal environments and the computer and systems science hasn't provided the theoretical frameworks that would allow the environments to be formalized so that they can be "engineered".

Wrong. Software is mostly deterministic. However, even a non-deterministic model is amenable to theory. Ever heard of probability? Additionally, non-linear systems are merely not as easily analyzable by certain methods such as calculus. You can still form a proof around a set of non-linear assembly instructions.

>allow the environments to be formalized so that they can be "engineered".

And that's my entire point. There's no point in iterating over the same endless design cycles, repeating the same mistakes every couple of years with a new framework that doesn't necessarily make anything better. Better to develop the formalism around "design" and optimize it. Another article like the one the parent poster linked is pointless and doesn't move the needle forward at all.

>It's definitely not "science", nor is it "engineering", but it is definitely a discipline. You are judging one field by the axioms of another and then declaring it invalid because it doesn't comply.

No, I'm looking at the field and seeing that we're going nowhere with endless conjectures and analogies about design. I'm seeing that the patterns of today aren't that much different from the patterns of the past. I'm also seeing software being called "engineering" while eschewing much of the formalism that is part of engineering.

Due to all this, I'm saying that the field is going nowhere and is pretty much a big sham. I'm tired of the pointless flat circle of repeating history. "Systems design" from the software perspective is therefore ripe for formalization and theory.


I think you're dismissing a critical consideration:

There are systems that are impractical to formalize, either because they are too complex, too poorly-understood, too dynamic, too ephemeral, or too expensive to do so.

Your complaints about the words "architecture" and "design" sound like a cached rant. I agree that there's sometimes an uncomfortable amount of squishiness in the way we build technology systems, but I don't think that's always a bad thing! By and large, we are not specialists in our sciences; we are generalists working in the emergent systems created by the interactions of formal components. For many, this is the exciting part.

We borrow heavily (terminology and patterns) from building, manufacturing, various artistic disciplines, and from complex systems theory. This is inevitable of course -- but when I think of "systems design", I think more of Donella Meadows (Thinking in Systems) than Morris Mano (Computer Systems Architecture, Digital Design) ... though both are brilliant and responsible for hundreds of hours of my university career! (And the latter is a good example of why your objections to terminology are dead ends.)

Our industry builds squishy complex systems (products, applications, interfaces, software, firmware, some hardware) on top of rigid formalized primitives (algorithms, other hardware, physics). The area we inhabit is fertile ground for iteration, agility, and yes, charlatanism. I can understand why that might be bothersome, but I think it goes back to the old vitality-volatility relationship -- can't have one without the other. You may prefer a different part of the curve of course.

I have friends who graduated with Civil Engineering degrees, and work in their industry. Their projects are extremely formal and take decades to realize. This is appropriate!

I have other friends with Architecture degrees (the NAAB, Strength of Materials kind of architecture), who work in their industry. Their projects are a mix of formal and informal processes, and they take years to realize. This is also great.

Now obviously there's a lot of self-selection involved in these groups, but even still everyone has their set of frustrations within their industry. We in technology can iterate in an hour, or a few days for a new prototype PCB rev. This gives the industry amazing abilities, and I would never trade!


Except I never dismissed this point. The problem with your post is that you assume I categorize design as something useless. I have not. Obviously there are tons and tons of things within the universe where the only possible solution is design. I am not arguing against this.

I am talking about a very specific aspect of the usage of "design" within software. I am specifically complaining about the endless iterations of exposés on software design patterns and architecture. The trends where history continuously repeats itself: FP becomes popular, then OOP becomes more popular than FP, then FP becomes popular again. What about the whole microservices/monoliths argument, where monoliths started out more popular, then microservices became popular, and now monoliths are coming back into vogue? Endless loops where nobody knows the optimal solution for a specific context.

These are all methods of software organization, AND my point is that THIS specific aspect of design is ripe for formalization, especially given the endless deluge of metaphor-drenched, pointless exposés on "design" inundating the HN front page. It's obviously an endless circle of history repeating itself. I am proposing a way to break the loop for a specific aspect of software by pointing out the distinction between "design" and "formal theory." We all know the loop exists because people confuse the two concepts and fail to even know what to do to optimize something.

System architects are artisans, not scientists, and as a result they will suffer from the exact same pointless shifts in artistic styles/trends decade after decade and year after year as their artisan peers do. It's fine for styles to shift, but our goal in software is to converge on an optimum as well, and that's not currently happening in terms of software patterns and architecture.

The path out of this limbo is to definitively identify the method for optimization formally, not add to the teeming millions of articles talking about software design metaphors.


Any idea on how to formalize this area? Is anyone even trying to do that?


Personally, I'm nursing a thesis that the study of concurrency is fertile ground for a formalization of modular design. Where parallelism is the optimization of a software system by running parts of it simultaneously, concurrency has much more to do with the assumptions held by individual parts of the program, and how knowledge is communicated between them. Parallelism requires understanding these facets insofar as the assumptions need to be protected from foreign action -- or insofar as we try to reduce the need for those assumptions in the first place -- but I expect that concurrency goes much further.

Concurrent constraint programming is a nifty approach in this vein -- it builds on a logic programming foundation where knowledge only increases monotonically, and replaces get/set on registers with ask/tell on lattice-valued cells. LVars is a related (but much more recent) approach.
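For readers unfamiliar with the ask/tell idea, here's a minimal single-cell sketch in Python (the `MaxLVar` name and threshold-read API are my own toy invention, not the actual LVars library): writes can only move the value up a lattice, and reads block until a threshold is crossed, so nothing a reader learns can ever be invalidated later.

```python
import threading

class MaxLVar:
    """A toy lattice-valued cell over the max semilattice on integers.

    Writes only move the value up the lattice (monotonic "tell");
    reads block until the value crosses a threshold ("ask"), so a
    reader can never observe information that later becomes false.
    """
    def __init__(self, bottom=0):
        self._value = bottom
        self._cond = threading.Condition()

    def put(self, v):
        # tell: join the new value into the cell (here, join = max)
        with self._cond:
            self._value = max(self._value, v)
            self._cond.notify_all()

    def get(self, threshold):
        # ask: wait until the cell's value is at least `threshold`
        with self._cond:
            self._cond.wait_for(lambda: self._value >= threshold)
            return threshold  # only the threshold fact is revealed

cell = MaxLVar()
results = []
reader = threading.Thread(target=lambda: results.append(cell.get(10)))
reader.start()
cell.put(4)   # not enough yet; the reader stays blocked
cell.put(12)  # crosses the threshold; the reader wakes
reader.join()
print(results)  # [10]
```

Note that `get` deliberately returns only the threshold, not the current value: revealing the exact value would leak non-monotonic information, which is the trick real LVar threshold reads use to stay deterministic.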

A different approach, "session types", works at the type system level. Both ends of a half-duplex (i.e. turn-taking) channel have compatible (dual) signatures, such that one side may send when the other side may receive. Not everything can be modeled with half-duplex communications, but the ideas are pretty useful to keep in mind.
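As a rough illustration of the turn-taking discipline (a dynamic Python toy, not real session typing, which would check the duality of the two endpoints statically), each operation consumes the current protocol state and returns the next one, so only the side whose turn it is even has a `send` method:

```python
from __future__ import annotations
from collections import deque

class Chan:
    """Shared half-duplex message buffer."""
    def __init__(self):
        self.buf = deque()

class Send:
    """State in which this endpoint may send exactly one value."""
    def __init__(self, chan: Chan):
        self._chan = chan
    def send(self, value) -> Recv:
        self._chan.buf.append(value)
        return Recv(self._chan)  # the turn passes to the other side

class Recv:
    """State in which this endpoint may only receive."""
    def __init__(self, chan: Chan):
        self._chan = chan
    def recv(self):
        return self._chan.buf.popleft(), Send(self._chan)

# The two endpoints start with dual states: Send on one side,
# Recv on the other, mirroring session-type duality.
chan = Chan()
client, server = Send(chan), Recv(chan)

client = client.send("ping")       # client: Send -> Recv
msg, server = server.recv()        # server: Recv -> Send
server = server.send(msg.upper())  # server replies
reply, client = client.recv()
print(reply)  # PING
```

In a language with linear or affine types, reusing a consumed state (e.g. calling `send` twice on the same object) would be rejected at compile time; here it's merely a convention.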

I try to keep my software systems as functional as possible (where "functional" here means "no explicit state"). But there are always places where it makes sense to think in terms of state, and so I try to model that state monotonically whenever possible. At least subjectively, it's usually a lot simpler (and easier to follow) than unrestricted state.

(Note, of course, that local variables are local in the truest sense: other programmatic agents cannot make assumptions about them or change them. Short-lived, local state is as good as functional non-state in most cases.)


> I try to keep my software systems as functional as possible (where "functional" here means "no explicit state"). But there are always places where it makes sense to think in terms of state, and so I try to model that state monotonically whenever possible. At least subjectively, it's usually a lot simpler (and easier to follow) than unrestricted state.

Agreed. You mention LVars, so I'm curious what you think about MVars and STM in general. I've always been fond of STM because relational databases and their transactions are a familiar and well-understood concept historically used by the industry to keep state sane and maintain data integrity. SQLite is great, but having something that's even closer to the core language or standard library is even better.

It's part of why I like using SQL to do the heavy lifting when possible. I like that SQL is a purely functional language that naturally structures state mutations as transactions through the write-ahead log protocol. My flavor of choice (Postgres) makes different levels of efficient reads and writes available through transaction isolation levels that can give me up to ACID consistency without having to reinvent the wheel with my read and write semantics. If I structure my data model's keys, relations, and constraints properly, I get a production-strength implementation with a lot of the nice properties you talk about -- and that holds regardless of the service-layer language I choose to stand up.
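To make the "let the database keep state sane" point concrete, here's a small sketch using Python's stdlib `sqlite3` (the ledger table and amounts are made up for illustration): a CHECK constraint plus the connection's transaction context enforces an invariant atomically, with no hand-rolled locking.

```python
import sqlite3

# Toy ledger: the CHECK constraint forbids negative balances, and
# the transaction guarantees both updates land together or not at all.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE accounts ("
    "  name TEXT PRIMARY KEY,"
    "  balance INTEGER CHECK (balance >= 0))"
)
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("alice", 100), ("bob", 0)])
db.commit()

def transfer(amount):
    try:
        with db:  # BEGIN ... COMMIT, or ROLLBACK on any exception
            db.execute("UPDATE accounts SET balance = balance - ?"
                       " WHERE name = 'alice'", (amount,))
            db.execute("UPDATE accounts SET balance = balance + ?"
                       " WHERE name = 'bob'", (amount,))
        return True
    except sqlite3.IntegrityError:
        return False  # constraint violated; both updates rolled back

ok1 = transfer(60)  # succeeds: alice 40, bob 60
ok2 = transfer(60)  # would overdraw alice; rolled back atomically
balances = dict(db.execute("SELECT name, balance FROM accounts"))
print(ok1, ok2, balances)
```

The service layer never sees a half-applied transfer; the invariant lives in the schema, which is the "production strength for free" property described above.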

There's one exception in particular that I've seen begin to gain steam in the industry and which I think is interesting, and that's Elixir. Because Elixir wraps Erlang's venerable OTP (and the distributed database Mnesia), users can build on top of something that has already solved a lot of the hard distributed-systems problems in the wild, in a very challenging use case (telecom switches). Of course, Mnesia has its own issues, so most of the folks I know using Elixir are using it with Phoenix + SQL. They seem to like it, but I worry about ecosystem-collapse risk with any transpiled language -- no one wants to see another CoffeeScript.


I'm not especially familiar with either MVars or STM, so you'll have to make do with my first impressions...

MVars seem most useful for a token-passing / half-duplex form of communication between modules. I've implemented something very similar in Java when using threads as coroutines. (Alas, Project Loom has not landed yet.) They don't seem to add a whole lot over a mutable cell paired with a binary semaphore. Probably the most valuable aspect is that you're forced to think about how you want your modules to coordinate, rather than starting with uncontrolled state and adding concurrency control after the fact.
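For concreteness, here's roughly what "a mutable cell paired with a semaphore" looks like (a toy Python rendition using a condition variable; not Haskell's actual MVar implementation): `put` blocks while the cell is full, `take` blocks while it is empty, which is exactly the token-passing shape.

```python
import threading

class MVar:
    """Toy MVar: a cell that is either full or empty.

    `put` blocks while the cell is full; `take` blocks while it is
    empty and empties the cell on the way out.
    """
    _EMPTY = object()

    def __init__(self):
        self._value = MVar._EMPTY
        self._cond = threading.Condition()

    def put(self, v):
        with self._cond:
            self._cond.wait_for(lambda: self._value is MVar._EMPTY)
            self._value = v
            self._cond.notify_all()

    def take(self):
        with self._cond:
            self._cond.wait_for(lambda: self._value is not MVar._EMPTY)
            v, self._value = self._value, MVar._EMPTY
            self._cond.notify_all()
            return v

# Token-passing "coroutine" style: the worker can only act once the
# main thread hands it a value, and vice versa.
inbox, outbox = MVar(), MVar()
worker = threading.Thread(target=lambda: outbox.put(inbox.take() * 2))
worker.start()
inbox.put(21)
result = outbox.take()
worker.join()
print(result)  # 42
```

The coordination protocol is forced into the open: there is no way to read the cell without also deciding who gets to write it next.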

STM seems very ambitious, but I struggle to imagine how to build systems using STM as a primary tool. Despite its advantages, it still feels like a low-level primitive. Once I leave a transaction, if I read from the database, there's no guarantee that what I knew before is true anymore. I still have to think about what the scope of a transaction ought to be.

Moreover, I get the impression that STM transactions are meant to be linearizable [1], which is a very strong consistency requirement. In particular, there are questions about determinism: if I have two simultaneous transactions, one of them must commit "first", before the other, and that choice is not only arbitrary -- the program can evolve totally differently depending on it.

There are some situations where this "competitive concurrency" is desirable, but I think most of the time, we want concurrency for the sake of modularity and efficiency, not as a source of nondeterminism. When using any concurrency primitive that allows nondeterminism, if you don't want that behavior, you have to very carefully avoid it. As such, I'm most (and mostly) interested in models of concurrency that guarantee deterministic behavior.

Both LVars and logic programming are founded on monotonic updates to a database. Monotonicity guarantees that if you "knew" something before, you "know" it forever -- there's nothing that can be done to invalidate knowledge you've obtained. This aspect isn't present in most other approaches to concurrency, be it STM or locks.

The CALM theorem [2] is a beautiful, relatively recent result identifying consistency of distributed systems with logical monotonicity, and I think the most significant fruits of CALM are yet to come. Here's hoping for a resurgence in logic programming research!
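The simplest toy that shows the monotonicity/consistency connection is a grow-only set (a sketch of the intuition, not the CALM formalism itself): because updates only add facts and merge is set union, replicas converge to the same state no matter what order the updates arrive in.

```python
# A grow-only set is the simplest monotonic store: updates only add
# facts, and merge is union, so no coordination is needed for
# replicas to agree -- the essence of "consistency as monotonicity".

def merge(*replicas):
    out = set()
    for r in replicas:
        out |= r  # union: knowledge only grows
    return out

updates = [{"a"}, {"b"}, {"a", "c"}]

# Two replicas apply the same updates in different orders...
replica1 = merge(*updates)
replica2 = merge(*reversed(updates))

print(replica1 == replica2)  # True: order never matters
print(sorted(replica1))      # ['a', 'b', 'c']
```

Non-monotonic operations (deletion, aggregation over "everything seen so far") are precisely the ones that reintroduce the need for coordination.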

> There's one exception in particular that I've seen begin to gain steam in the industry which I think is interesting, and that's Elixir.

I've not used Elixir, but I very badly want to. It (and Erlang) has a very pleasant "functional core, imperative shell" flavor to it, and its "imperative shell" is like none other I've seen before.

[1] https://jepsen.io/consistency/models/linearizable

[2] https://rise.cs.berkeley.edu/blog/an-overview-of-the-calm-th...


There are many topics in this area. Ones that are well known in industry are algorithmic complexity theory and type theory. Less well known are the two resources below.

http://www4.di.uminho.pt/~jno/ps/pdbc.pdf

https://softwarefoundations.cis.upenn.edu

I suggest you get used to ML-style languages before diving into those two resources (Haskell is a good choice), as this material is not easy to learn -- which I think is also part of the reason it hasn't been so popular in industry.

The first resource builds toward a Prolog-like programming style where you feed the computer a specification and the computer produces a program that fits the specification.

The second resource involves utilizing a language with a type checker so powerful that the compiler can fully prove your program correct, beyond just its types.
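For a taste of what that means in practice, here is a one-line analogue in Lean 4 (a close cousin of the Coq used by Software Foundations; `my_add_comm` is just an illustrative name): the theorem's type is the specification, and the term the compiler accepts is the machine-checked proof.

```lean
-- The type `a + b = b + a` is the full specification; the term
-- `Nat.add_comm a b` is a proof object the type checker verifies.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the term didn't actually prove the stated equation, the file simply wouldn't compile -- "correct by construction" in the most literal sense.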

Both are far away from the ideal that the industry is searching for, but in terms of optimizing and formalizing design, these two resources are examples of the right approach to improving software design.

I haven't found anything specifically on the organization of software modules so as far as I know none exists. But given the wide scope of software research I'm sure at least one paper has talked about this concept.


> I haven't found anything specifically on the organization of software modules so as far as I know none exists.

Parnas (1971) is seminal on this topic. https://apps.dtic.mil/sti/pdfs/AD0773837.pdf


Sure, this paper introduces the concept of modules in software back when modules were non-existent. As far as I know, there's nothing on the optimal way to organize (keyword) modules.


> Sure this paper introduces the concept of modules in software back when modules were non-existent.

No... that's not correct. The very first non-clerical page quotes from a textbook discussing modular systems, and Parnas himself notes a distinct lack of material on how to actually organize and break down the system into modules.

The paper is literally called "On the criteria to be used in decomposing systems into modules."

It is prudent to at least read the preexisting material if you are going to dismiss it.


My mistake, I skimmed it and saw assembly language and I assumed it was that really early seminal paper that introduced modules to the programming world. Obviously my guess was wrong.


> Your complaints about the words "architecture" and "design" sound like a cached rant.

To be completely fair, my notes were similarly cached, and it was a little poor of me not to respond more directly to the substance of the original post.


Mine wasn't. He's just making that up.


> I disagree with you and you've made a mistake. You're the one under the illusion. I'll explain.

This is a very effective way to make the other party in a debate not wish to continue. I respect that you have strong opinions, but I'm not sure it's worth being right in a population of one.

In all sincerity, I'll leave you with wishes for a restful end of year.


To be fair, you said that I was under an illusion first. That definitely pissed me off a bit. I just subtly returned the favor and forgot about your indiscretion, as it's not a big deal to me. It obviously is to you.

You're free to leave and continue at your own discretion.


> but don't try to think of this stuff as anything on the same level as the sciences.

Depends on what kind of sciences we're talking about. For an enormous span of the sciences, serious reproduction and funding crises have emerged that place whole fields on the verge of, if not in the middle of, conceptual and practical collapse (hence the continuing brain-drain from science into tech).

At a deeper level, I think you're making a lot of assumptions that don't seem rooted in reality. Worse still, you don't seem to feel any need to root them in reality -- you minimize and diminish uncomfortably sharp edges in a very whiggish kind of historiography. Why is that? Your conjecture is that architecture is not well understood because it is not formalized. What if you're merely conflating cause and effect, and software architecture has not been formalized because it is not well understood? Then formalization is merely a memento of something else -- something far more primal, something far deeper. And I think (though only you can say) that you might be afraid of that.

When we learn how to use powerful tools, we want to use them for everything. That includes philosophical ones. What if you're taking it for granted that architecture will /ever/ reach the same level of formalization as the hard sciences? After all, there is something a bit hubristic about that. Wouldn't it be ironic to have such a deep faith in formalization as to ignore (and even invert) the fundamentally unidirectional flow from phenomena to formal abstraction? But that seems to be what you're doing. After all, you have no proof that formalism, as a tool in and of itself, lends any utility to making sense of the world or building structures for it.

It is the observations, the data accumulated in enough curious detail, kept preserved through painstaking work via the investigator and archiver, which provide material for abstraction to compact into greater expression: succinctness, clarity, and semantic power. But without that raw material, the abstraction is nothing but window dressing. It is onanistic and serves no formal purpose. And frankly, software as a field is very young by human standards. For it to not have the level of formal abstractions far older human pursuits have is a function of its youth. I think that to love formal abstraction for its own sake is an obscurantism of its own, and this kind of thing has a name: it's called scientism.

Bertrand Russell tried to axiomatize mathematics into logic, and completely failed. And not because he was unintelligent. He failed because the opposite premise was vindicated by reality. What will you do if the same holds true for you here? Even he was so taken and seduced with the elegant possibility of axiomatizing mathematics that he was unable to see the logistical (and finally logically provable) impossibility of doing so. And finally even he had to ultimately give up his endeavor as directly fruitless (although the silver lining was that his work did pave the way for a lot of crucial discoveries). Consider whether you're doing the same thing.


Aside from the context of this thread, you've really eloquently put into words something I've started to believe over the last few years. Please never delete this comment ;) I'm sure to bookmark it.

> It is the observations, the data accumulated in enough curious detail, kept preserved through painstaking work via the investigator and archiver, which provide material for abstraction to compact into greater expression: succinctness, clarity, and semantic power. But without that raw material, the abstraction is nothing but window dressing.

yesssss.


>At a deeper level, I think you're making a lot of assumptions that don't seem rooted in reality. Worse still, you don't seem to feel any need to root them in reality -- you minimize and diminish uncomfortably sharp edges in a very whiggish kind of historiography. Why is that?

What the hell is whiggish historiography? You want me to come up with some textbook historical account of how software design has moved in circles? I assume it's obvious, and it's just generally hard to write about a trend that defies exact categorization. It probably can be done; I just can't spare the effort.

As for your other stuff, I respectfully request that you keep this argument formal rather than commenting on my personal character. Nobody appreciates another person speaking about their character in a negative and demeaning way. You are doing exactly this, and there's really no point or need -- unless your goal is to set me off personally and have this argument degrade into something where we both talk about each other personally.

>What if you're merely conflating cause and effect, and software architecture has not been formalized because it is not well understood? Then formalization is merely a memento of something else -- something far more primal, something far deeper. And I think (though only you can say) that you might be afraid of that.

If that's the case, then your reasoning should have existing evidence behind it. Are there any attempts at formalizing software organization in academia? Have those attempts been ignored, or have they actually failed?

Either way, in the industry, it's pretty clear to me that a lot of energy is spent discussing, debating, and coming up with design analogies year after year. In my opinion this is actually a misguided attempt at optimization. People think the latest software design analogy or architecture pattern is going to save the world, but it's always just a step to the side rather than forward, and that's because most people in the industry don't even know what it means to truly "step forward."

> What if you're taking it for granted that architecture will /ever/ reach the level of formalization as the hard sciences?

Well, I'm obviously betting that it can. Either way, there's no denying that, given a formal definition of a problem, a single best solution (or several) does exist. In other words, there must exist a configuration of assembly instructions that fits a general definition of the best solution to a problem. Whether a formal method other than brute force can help us find or identify this solution remains to be seen, but the existence of a best solution is enough for me to "predict" that a formal method for finding it exists.

>It is the observations, the data accumulated in enough curious detail, kept preserved through painstaking work via the investigator and archiver, which provide material for abstraction to compact into greater expression: succinctness, clarity, and semantic power. But without that raw material, the abstraction is nothing but window dressing. It is onanistic and serves no formal purpose.

You're conflating science and logic. Science is about observing reality and coming to conclusions based on observations and the assumption that logic and probability are real. It involves a lot of statistics.

Logic is just a game with rules. We create some axioms and some formal rules and play a game where we come up with theorems. The game exists apart from reality. We build computers on the assumption that logic applies to reality. But more importantly we use computers on this assumption as well.

Thus the computer is in fact a tool designed to facilitate this logic game. The computer is in itself a simulator of a simple game involving formal rules with assembly language terms as the axioms.

That's why formality applies to software and we can disregard science for a good portion of computing.

Scientism has nothing to do with this. You are at an extreme point of misunderstanding here.

>Bertrand Russell tried to axiomatize mathematics into logic, and completely failed. And not because he was unintelligent. He failed because the opposite premise was vindicated by reality.

No, he failed because he assumed that logical systems must be both consistent and complete. Others have since shown that logical systems can exist within our logical games, so long as they do not hold both of those properties -- consistency and completeness -- at the same time.

Also, the logical premise you are speaking of was not vindicated by reality (aka scientific observations). It was vindicated by additional formal logical analysis by another logician. Reality and logic are two separate things. The main connection in those two areas is that science assumes logic and probability are part of reality. Outside of that assumption, the two have zero relation.


> Thus the computer is in fact a tool designed to facilitate this logic game. The computer is in itself a simulator of a simple game involving formal rules with assembly language terms as the axioms.

It's somewhat poetic, beautiful and accidentally funny that your argument frays at the same blind spots that your tone does, which is at the gap between theoretical and on-the-ground meaning. It is true in a theoretical sense that you could describe a computer as a simulator which can be modeled with formal rules. But it tells me nothing about why the computer as a mass-market electronic device which runs mass-market software occupies the societal role it does, nor where it will go, nor why it was an innovation which transformationally altered society in the same way as the flame, the wheel, the chariot, the bow, the written word, the loom, the printing press.

Now, if you're not going to try and engage with any of that because "It probably can be done, I just can't spare the effort" what do you think is more likely:

1) your complaint most loudly voices new and fresh insight to the industry which it had previously missed

2) your complaint most loudly voices your own gaps in curiosity and understanding

If it's 2, then I just think that's sad. I see a lot of engineers (frequently early in their careers and suffering from a bit of imposter syndrome) get stuck in a rut where they dogmatically devour formalism as a vehicle to elevate the level and quality of their engineering to the rigor seen in the sciences. But the appropriation of surface-level aesthetics masks a methodological inability to do the dirty work to get the actual job done via "less sexy" paths (as reality often requires), and to differentiate and harden critical sections when necessary. It's sad because it generally speaks only to their own lack of exposure to just how much power and impact software can be built to have, and it takes the momentary weakness of inexperience and calcifies it into eternal inexperience and terminal juniority, owing to an unrealistic, radioactive hubris far out of proportion with actual capability.

That could be you. It doesn't have to be.


> The main connection in those two areas is that science assumes logic and probability are part of reality.

That's definitely not the case with respect to science, which is empirical and not rationalist. What you're describing in that sentence is Platonic realism, not science.


It definitely is the case.

Science involves statistics, statistics is built on top of the theory of probability, and probability is built on top of logic.

Therefore, for science to utilize statistics, it must assume probability and logic are true.

https://en.wikipedia.org/wiki/Empiricism

Examine the quotation in the first section: "Empiricism in the philosophy of science emphasises evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation. "

The key word here is "solely" meaning that empiricism involves the addition of observational evidence on top of other forms of analysis.

Additionally take this sentence: "Empiricism, often used by natural scientists, says that "knowledge is based on experience" and that "knowledge is tentative and probabilistic, subject to continued revision and falsification"

Note that falsification, as used here, cannot exist without assuming logic is true.


> it must assume probability and logic are true.

True in what sense? I appreciate the rigor you're trying to bring to this discussion. Most of the points regarding logic you raise here and in your other replies on the thread have been comprehensively debunked in Kant's Critique of Pure Reason.

One omission is the nature of human senses and perceptions, the understanding of which eliminates any possibility whatsoever of conclusively distinguishing the statement "logic is true" from "logic appears true to us but its actual truth status is unknowable". This is just a toy explanation; the Critique is far more comprehensive and rigorous.

> empiricism involves the addition of observational evidence on top of other forms of analysis.

That may be the case in circumstances where it's convenient and other forms of analysis serve the function of useful tools or mental aids. However empirical observation always trumps other forms of analysis when they're in conflict. Otherwise we're no longer discussing science anymore.


>"logic is true" from "logic appears true to us but its actual truth status is unknowable".

That's just a paradox. There's no point in going too deep into it, as it's unresolvable. At best it can be "Assumed" that logic is true. Replace "assume" with "pretend" -- it's the same thing. Science pretends logic is true. Whether it's actually true or not is the paradox.

>That may be the case in circumstances where it's convenient and other forms of analysis serve the function of useful tools or mental aids. However empirical observation always trumps other forms of analysis when they're in conflict. Otherwise we're no longer discussing science anymore.

Sure, but this can only be done in conjunction with logic. If you observe evidence and the evidence leads to a conclusion, the "leading to a conclusion" is done through logic.

Another way to put it: if logic weren't real, no observation would make sense. If I observe a pig, then pigs exist, by logic. If logic weren't real, the observation wouldn't imply that pigs exist. By logical analysis, an observation is therefore useless without logic.

Suffice it to say, at this level of analysis we can clearly conclude that science "pretends" logic and probability are true. Getting deeper than this dives into paradoxes, which are ultimately uninteresting dead ends to me because they're unresolvable.


> At best it can be "Assumed" that logic is true.

What exactly gives you the right to do that? And why is your reason any better than a different one provided by someone else who wants to argue the opposite?

> Replace "assume" with "pretend" it's the same thing.

We've now turned science into dogma. At what point do we then need to become aware that we're pretending? Surely there comes a point where pretending costs us epistemic legitimacy. Where is that point? And what response do we offer to an interlocutor who insists the earth is flat and that our logical deduction of its spherical shape is "pretend"?

> Sure but this can only be done in conjunction with logic. If you observe evidence and the evidence leads to a conclusion the "leading to a conclusion" is done through logic.

> If I observed a pig, then pigs exist by logic. If logic wasn't real then the observation doesn't imply pigs exist.

Both of these statements are nonsensical and are debunked in the Critique of Pure Reason.

> Getting deeper into this dives into paradoxes which are ultimately uninteresting dead ends to me because it's unresolvable.

I have a feeling we're dealing with a small bit of motivated reasoning with respect to interestingness here.


>What exactly gives you the right to do that? And why is your reason any better than a different one provided by someone else who wants to argue the opposite?

Why use science if it won't work without assuming that the principles it's built on are true? We assume science is true, and we've established that logic cannot be established to be true. Thus, if logic cannot be established to be true, then science cannot be established to be true; so why do we use science?

The only other conclusion is we "assume" science is true and therefore "assume" logic is true even though we can't truly know if it's true.

>We've now turned science into dogma. At what point do we then need to become aware that we're pretending? Surely there comes a point where pretending costs us epistemic legitimacy. Where is that point? And what response do we offer to an interlocutor who insists the earth is flat and that our logical deduction of its spherical shape is "pretend"?

I'm not a philosopher. I'm not into epistemology; I'm not even entirely sure what it is. So if you dive into that world too deeply, the argument is over, because I can't argue about something I don't know. Either you explain your points in layman's terms or the argument can't proceed very far, because I won't be able to understand you.

I'm just saying that "pretending" is the same thing as "assuming." We don't actually know if something is true, but we still use science as if it's true. The contradiction is what allows us to use the word "pretend": we know that it cannot be known, yet we act as if it is known. Hence "pretend."

>Both of these statements are nonsensical and are debunked in the Critique of Pure Reason.

Well, declaring a statement nonsensical doesn't mean anything to me without you explaining the reasoning behind your declaration. Citing a book won't really do anything for me because I haven't read the book. We're at a dead end here. Obviously I won't read the book because it's too long to read right now, and obviously you won't explain the book for the same reason, so on this point the argument is over... we've reached an impasse and can only agree to disagree unless you decide to explain the book to me.

>I have a feeling we're dealing with a small bit of motivated reasoning with respect to interestingness here.

I'm interested up to a point. If the point is a paradox, I'm not interested in exploring the paradox. If that's the direction you're taking your argument, then it's an impasse. Either way, we're just debating nomenclature here.


> The industry is ripe for formality in this area as we've been going in circles for decades. We are dealing with computers here: virtual idealized worlds ripe for systematic formalism.

The "industry" is not about computers, it is about the use of computers to solve people's needs. You're confusing the engineering with the need.

Sure, people wrap up stupid or simple ideas in layers of abstraction, usually because they want to sell consulting.

But this article was more about looking beyond the boundaries of simple software development, needing to understand the context of where these systems operate.

Essentially, your argument is proving the early part of the article, where the author is comparing how some juniors are excellent "engineers" but can't move up to higher levels of abstraction, vs others that might not deal well with individual details, but have a better understanding of how the systems they develop need to interact with the rest of the world.


>The "industry" is not about computers, it is about the use of computers to solve people's needs. You're confusing the engineering with the need.

No I'm not. Every "need" can be formalized into a specification.

>Sure, people wrap up stupid or simple ideas in layers of abstraction, usually because they want to sell consulting.

So?

>But this article was more about looking beyond the boundaries of simple software development, needing to understand the context of where these systems operate.

And I'm saying this article is just listing examples and making analogies. It doesn't formally define what a system is, it doesn't discuss any way to use this theory to optimize a system, nor does it try to define what "optimize" means. It's just one of a million articles that try to talk about "design" and create an analogy or metaphor out of it while teaching the reader absolutely nothing new.

>Essentially, your argument is proving the early part of the article, where the author is comparing how some juniors are excellent "engineers" but can't move up to higher levels of abstraction, vs others that might not deal well with individual details, but have a better understanding of how the systems they develop need to interact with the rest of the world.

Yeah, I didn't care for his argument; he's free to make that analogy. Think what you want about how I "proved" it... The meaning of the metaphor itself is irrelevant to my topic. It's the fact that the metaphor exists and basically introduces nothing new to the concept of design that is part of my complaint.


I cannot agree more. There is way too much B.S. in the field of software and it's a joke that we get away with calling ourselves "software engineers" or "computer scientists" for how little analytical decision making or scientific method applies.


> monolith vs. microservices? Which is actually better? ... the resulting non-answer people come up with is: "Depends on your use-case."

But that's the truth of the matter. _Neither_ is "actually better" without qualifications.


Prove this fact formally for each qualification. Resolve the debate once and for all. You can't, which is my point.


> You can't,

I don't need to.


Even mathematicians are not all-knowing. There are lots of theorems that we suspect to be true that have never been proven to be true. In fact, Gödel's incompleteness theorems state that there will always be statements that are true, but can never be proven to be true.

Fields should be as mathy as they can be, but no mathier. Otherwise it's just fake formalism, as you said yourself.

> Let me put it to you this way. Anytime you see the word "Design" it refers to a field where we have little knowledge of how things truly work.

You are setting up a false dichotomy between knowing everything (which nobody does, certainly not scientists or mathematicians), and knowing nothing.

There are plenty of fields with "design" in the name that are quite rigorous: analog circuit design, digital signal processor design, microprocessor design, compiler design, etc.


>Gödel's incompleteness theorems state that there will always be statements that are true, but can never be proven to be true.

Not exactly. Incompleteness always holds for a consistent system of axioms (one expressive enough to encode arithmetic). It does not have to hold for an inconsistent system of axioms.
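
For reference, the usual statement of the first theorem makes the consistency hypothesis explicit. Roughly:

```latex
% Gödel's first incompleteness theorem (informal paraphrase):
\text{If } F \text{ is consistent, effectively axiomatized, and can express arithmetic, then} \\
\exists\, G_F : \quad F \nvdash G_F \quad \text{and} \quad F \nvdash \neg G_F
```

An inconsistent system, by contrast, proves every sentence, so no such undecidable sentence exists for it.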

>Fields should be as mathy as they can be, but no mathier. Otherwise it's just fake formalism, as you said yourself.

If I used the words "fake formalism," I'm likely referring to the usage of big words and titles to falsely promote a sense of legitimacy. Like "system architect."

Fields should be as mathy as we can make them as developing a formal language around any concept aids in exact analysis.

>There are plenty of fields with "design" in the name that are quite rigorous: analog circuit design, digital signal processor design, microprocessor design, compiler design, etc.

Yeah, but all of these fields involve intuitive guesses. Given specific requirements, can you derive via calculation the optimal circuit design, digital signal processor design, microprocessor design, or compiler design?

These constructs are "designs" rather than calculations, and as a result they reflect a lack of knowledge of how to calculate the best possible "design."


> Anytime you see the word "Design" it refers to a field where we have little knowledge of how things truly work.

There are branches of science whose theories are more often derived from data and observation than from axioms. Just because you may not be able to break something down into first principles does not mean that the knowledge is useless.

Design, too, can be data driven. UX and UI design can be analyzed using A/B tests, or by observing patterns of user behaviour.
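
The A/B-testing point is the one place this is already routine: the design comparison bottoms out in a standard significance test. A minimal sketch (a two-proportion z-test; the click counts are made up for illustration, not data from any real experiment):

```python
import math

def ab_test_z(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # pooled proportion under the null hypothesis (no real difference)
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the normal approximation
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical numbers: variant A converted 200/2400, variant B 260/2400.
z, p = ab_test_z(200, 2400, 260, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up counts the difference clears the conventional p < 0.05 bar, which is exactly the kind of "data-driven design" decision the paragraph describes.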

It might be true that much of systems design is based on anecdotal evidence and intuition, but I don't think that's enough of a reason to ignore the field of design entirely.

For example, the concept of abstraction in software design may be based primarily on the intuition that human beings are bad at holding too much complexity in their minds. But any software developer who has written more than one program will agree that abstraction is crucial to good design.


>There are branches of science whose theories are more often derived from data and observation than from axioms. Just because you may not be able to break something down into first principles, does not mean that the knowledge is useless.

Yes, science is different from logic. Programming functions happen in a limited axiomatic world that simulates logic. This makes computer science a bright target for mathematical formalization. This is entirely different from science.

>It might be true that much of systems design is based on anecdotal evidence and intuition, but I don't think that's enough of a reason to ignore the field of design entirely.

I never said to ignore the field. Often we have no choice. No one calculates the best work of art. Art is created by design.

>For example, the concept of abstraction in software design may be based primarily on the intuition that human beings are bad at holding too much complexity in their minds. But any software developer who has written more than one program will agree that abstraction is crucial to good design.

The concept of abstraction, good abstractions and bad abstractions can be separated from design and formalized into exact definitions. That is my argument.

The reasoning behind why a human would want to do that is irrelevant.


> This makes computer science a bright target for mathematical formalization. This is entirely different from science.

Yes, but until computer science is fully formalized we will still need design, and can benefit from science. If/when it is fully formalized and creating software becomes an automate-able optimization problem, we will no longer need system designers. Or software developers, for that matter.

> I never said ignore the field.

Not explicitly, perhaps, but it really does read like that is what you're implying. You said multiple times that anyone labelled as an architect or designer knows nothing and is peddling bullshit. If we can agree that design is amenable to scientific inquiry, then it would make sense that some designers do know things.

Re-reading your original post, I realize now that I chose the wrong quote to respond to. I do agree with you that anything labeled as "design" is necessarily constrained by a lack of knowledge. Any time there are multiple ways to solve the same problem and there is no a priori way to figure out which solution is the best, we are forced to design. My point is that this describes software development, which as noted above has not been fully formalized. Writing software is design, and therefore needs designers.

> Often we have no choice. No one calculates the best work of art. Art is created by design.

Are you implying that when it comes to software, we do have a choice?

> The concept of abstraction, good abstractions and bad abstractions can be separated from design and formalized into exact definitions. That is my argument.

This is where you lose me, I'm not sure I understand what you mean here. Abstraction is a design principle, and developers argue constantly about whether a given abstraction is good or necessary. The motivation behind abstraction as a principle hinges on how you define "too complex", and that sounds very subjective to me—the opposite of formal.


>This is where you lose me, I'm not sure I understand what you mean here. Abstraction is a design principle, and developers argue constantly about whether a given abstraction is good or necessary. The motivation behind abstraction as a principle hinges on how you define "too complex", and that sounds very subjective to me—the opposite of formal.

It's subjective because it still exists in the realm of design. Once we formalize these notions, the definitions become more clear. The key is the formalization of fuzzy words. The definition of complexity is subjective, yet there is a shared definition that we all roughly agree on; otherwise we wouldn't be able to communicate. The key is to pinpoint the exact shared metric that causes us to consider one piece of code more complex than another. Not an easy task. Formalization is very much a deep dive into our internal and linguistic psychology.

Take for example "luck." The concept of luck was formalized into a whole mathematical field called probability. Again not an easy task but doable for even fuzzy concepts like luck.
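
To make the luck example concrete: once "luck" is formalized as probability, a fuzzy question like "how lucky is rolling three sixes in a row?" has an exact answer, and a simulation converges on it. A toy sketch (the dice game is illustrative):

```python
import random

random.seed(0)  # deterministic run for reproducibility

# The formal theory gives an exact answer: P(three sixes) = (1/6)^3
exact = (1 / 6) ** 3

# A Monte Carlo estimate of the same "luck"
trials = 200_000
hits = sum(
    all(random.randint(1, 6) == 6 for _ in range(3))
    for _ in range(trials)
)
estimate = hits / trials

print(f"exact = {exact:.5f}, estimate = {estimate:.5f}")
```

The point is the direction of travel: a word everyone used informally ("lucky") became a quantity you can compute with, and check.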

>Not explicitly, perhaps, but it really does read like that is what you're implying. You said multiple times that anyone labelled as an architect or designer knows nothing and is peddling bullshit. If we can agree that design is amenable to scientific inquiry, then it would make sense that some designers do know things.

Maybe a better way to put it is this: many design principles are bullshit simply because we don't know which of two opposing design principles is better or worse. There are a lot of rules of thumb that happen to work, but there's also a lot of stuff that's pure conjecture, unproven, and even stuff that doesn't actually work. For example, OOP was previously the de facto way of programming; now it's highly questioned as a methodology. That brings all the "experts" who promoted it as the one true way into question.

Additionally, if you meet someone with the title "architect," a better title for them is "technical manager," because that's what they actually are. The title "architect" implies they have specialized formal knowledge, when in fact they are usually just managers with more experience. Really, that's the only difference: any typical engineer, all other things being equal, has pretty much the exact same informal knowledge an architect has. After all, it's all informal anyway.

>Are you implying that when it comes to software, we do have a choice?

I'm saying what you already know. We do have a choice to move software in the direction of formalized methods for things labeled as "design"; no such choice exists for art.


> Programming functions happens in a limited axiomatic world that simulates logic.

You sure you're not mixing up "programming functions" with "powerpoint presentations" here?


Dude, what's up with that comment. Are you mocking me?

No. I'm talking about how a computer is basically a logic simulator. You don't need to use empirical methods to prove things in a logic simulator, you just use logic.
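
In that spirit, here is a sketch of what "prove by logic, not observation" looks like inside the machine: a propositional claim can be established by exhaustive case analysis rather than by sampling evidence. (A toy illustration, not a serious theorem prover.)

```python
from itertools import product

def is_tautology(f, nvars):
    """Prove a propositional formula by exhaustive case analysis:
    it is a tautology iff it holds for every truth-value assignment."""
    return all(f(*vals) for vals in product([False, True], repeat=nvars))

# De Morgan's law: not (a and b)  ==  (not a) or (not b)
de_morgan = lambda a, b: (not (a and b)) == ((not a) or (not b))
print(is_tautology(de_morgan, 2))  # proven for all cases, not merely observed
```

Unlike an empirical claim, the check covers the entire (finite) space of cases, so the result is a proof, not an estimate.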


You're free to ignore Conway's law[1] and the like at your own risk.

I would argue that if it were straightforward to apply such rigor to software or organizations, you'd see something closer to hardware, where formal verification plays a much larger role (to be fair, there's also the aspect of hardware/ASIC respin costs). As a byproduct, I think you see much less autonomy, or even diversity of job roles, in that space, because a good portion of the problems are considered "solved".

This is not a knock against hardware (it's hard!), but rather that the design space of large-scale software projects involves a diverse set of teams and people, so you see similar patterns arise.

[1] https://en.wikipedia.org/wiki/Conway%27s_law


You are making the assumption that the actual science of computer science has anything to do with "software engineering". The formal theories of CS, e.g. proofs of correctness, have very little to do with day-to-day programming. The only example I know of true software "engineering" was the NASA Space Shuttle software team.

But they had the benefit of a fixed platform and total control of the software environment. Attempts to implement similar levels of "engineering" (e.g. CMMI) always fail unless they have similar constraints.

Software development is not "engineering" in the classic sense, where there are a set of formalized approaches, acceptance criteria etc to specific areas of implementation.

People wrap things up in "process" to try to present a picture of "engineering" but it's a Potemkin village.

On the other hand, software "architects" are doing what building architects do. They apply heuristic approaches, hopefully using patterns that have been shown to be successful. The reason architectural patterns appealed to programmers is that both fields are not engineering.

The trouble is that many software architects are focussed on the low level engineering. Architects that draw "C4" diagrams that go lower than maybe 2 levels aren't being architects.

Design doesn't travel in circles, it learns from experience. It understands that there needs to be constraints, otherwise the problem to be solved is unbounded. It understands that the system or product has to interact with its users and that the interaction needs to be as seamless as possible, so affordances are vital.

Don't judge a professional field ("design") by empirical science standards.

Design is not engineering.


>The trouble is that many software architects are focussed on the low level engineering. Architects that draw "C4" diagrams that go lower than maybe 2 levels aren't being architects.

I would argue software architects aren't doing any engineering. They're doing something similar to art: drawing and painting. The main problem is that big words like "architect" deceptively paint a picture as if they're doing anything other than art.

>Design doesn't travel in circles, it learns from experience. It understands that there needs to be constraints, otherwise the problem to be solved is unbounded. It understands that the system or product has to interact with its users and that the interaction needs to be as seamless as possible, so affordances are vital.

Much of it does travel in circles, because what happens is people encounter two differing designs but can't really know which design is better. Additionally, it doesn't necessarily always improve, because nobody can truly define improvement.

Take the monoliths vs. microservices argument. It's cycled away from monoliths and back. Take FP and OOP, it's also cycled from FP to OOP and back to FP again. Or what about the newest framework flavor of the week in javascript that tries to improve on all the faults of the front end but ends up being a step to the side? These circles are everywhere.

>Don't judge a professional field ("design") by empirical science standards.

I judge it by whether it's improved or not. It has not. Technology shifts but it doesn't shift in the direction of improvement. Many shifts are just horizontal genetic drift.

>Design is not engineering.

"Engineering is the use of scientific principles to design and build machines, structures, and other items, including bridges, tunnels, roads, vehicles, and buildings.[1] The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. See glossary of engineering."

keyword: "design"

source : https://en.wikipedia.org/wiki/Engineering


I understand the core of your rant is "the lack of formalization prevents us from truly knowing what an optimized design is so we just guess." You would prefer society would work on developing better formal theories instead of ... what? Doing business, solving concrete problems, and building stuff?

I believe you can do a rigorous design/architecture in theory. In practice, we cannot handle the complexity and uncertainty. Usually at some point you have a human as part of your system to close some feedback loop and then you are discussing fuzzy topics like psychology.

You use programming languages as examples and claim that Rust is "closer to an optimum solution." Most of the other replies are about our inability to agree what that optimum should be. Rust certainly loses in terms of ecosystem maturity to C++, so it is further away from the optimum in that aspect and thus many people rationally decide not to use it.

I agree with you that design and architecture are a lot about intuition and gut feeling and should be more formal. However, we don't have time to wait for the formal theory. Food needs to be put on the table, so income needs to be generated, so business needs to be done, so decisions with incomplete information and without formal theory must be made by biased humans. Unfortunately, the complex and wicked problems are usually the more important ones. If you pick C++ instead of Rust, the organization will probably survive anyway. If you pick the wrong technology or person, it can kill a company.


>discussing fuzzy topics like psychology.

Psychology is fuzzy, but the study of it is actually quite formal in its empiricism. You may be thinking of psychotherapy? Not sure, and I'm also not familiar with psychotherapy, so take that with a grain of salt.

I'm not invalidating design and all things without rigor. I'm invalidating specific trends where we end up going in circles because of lack of rigor even though it doesn't need to be that way. Software design is one such area.

>I agree with you that design and architecture are a lot about intuition and gut feeling and should be more formal. However, we don't have time to wait for the formal theory. Food needs to be put on the table, so income needs to be generated, so business needs to be done, so decisions with incomplete information and without formal theory must be made by biased humans. Unfortunately, the complex and wicked problems are usually the more important ones. If you pick C++ instead of Rust, the organization will probably survive anyway. If you pick the wrong technology or person, it can kill a company.

Sure, agreed; I never said otherwise. I'm remarking more on the evolution of the industry. How much of this decade was an improvement over the previous decade? How much of it was a repetition of the same historical mistakes, made again and again? I am proposing that formalism should be used to break out of the loop. I am not proposing that you use formalism to do your job, at least not yet. Think of it as: where should the boss put his R&D funds? Formalism is it.


If you prefer formalism, the cybernetics literature should appeal. I prefer the Systems Dynamics approach, because it's formal enough to avoid woo, but informal enough to avoid avoidance.

But it's a mistake to draw, from this blog post, the general conclusion that a program aiming to describe, catalogue, and comprehend phenomena across many fields in common terms is a fool's errand. It's been an ongoing program of research for most of a century now.


Can you suggest a good introductory book for Systems Dynamics?

The last time the topic came up, I read the popular one from Donella Meadows, but I found it too shallow [0]. Now I'm considering "An Introduction to General Systems Thinking" because it was written by a computer scientist, Gerald Weinberg.

Also, is there any free tool to play around with modelling feedback loops? LOOPY [1] becomes too simple quickly.

[0] http://beza1e1.tuxen.de/thinking_in_systems.html

[1] https://ncase.me/loopy/


Ok, saw "Sterman's Business Dynamics" in the other comment. :)

Any good tool suggestions?


Sheetless looks promising: https://sheetless.io/

There are also the classic tools like Stella, but they're expensive.


I found https://insightmaker.com/ which looks quite capable and polished.


>But it's a mistake to create the general conclusion, from this blogpost, that a program aiming to describe, catalogue and comprehend phenomena across many fields in common terms is a fool's errand. It's been an ongoing program of research for most of a century now.

It's not a fool's errand, but the way this article and many others approach the task is a fool's errand.

If he thinks the colloquial use of "systems design" applies to various fields outside of computing, then he should formally define what he means by a "system" and then prove his point. We don't need another blog post about some "design" metaphor.


I had a slightly different complaint from yours, which is that the author doesn't seem conversant with the kinda-sorta-formal fields of study that already exist.

Take for example the "chicken-egg" problem. Economists study this under "multi-sided markets" (which the author touches on, but late in the example), as part of the study of path dependency. Systems dynamics researchers have framed the economics work in their own terms of stocks and flows, but the essential structure is the same and can be reduced to equations. It's mostly calculus, sometimes there's some linear algebra.
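
For the curious, here is roughly what "reduced to equations" looks like for the chicken-egg case: a single stock (adopters) whose inflow depends on the stock itself, as in the Bass diffusion model. A minimal sketch with made-up parameter values:

```python
def bass_diffusion(p=0.03, q=0.38, market=1000.0, steps=40, dt=1.0):
    """Stock-and-flow sketch of adoption: the stock is the installed
    base A; the flow is the adoption rate, driven by innovation (p)
    and imitation (q * A / market). The installed base feeds back
    into its own growth -- the chicken-egg loop as an equation."""
    adopters = 0.0
    history = []
    for _ in range(steps):
        potential = market - adopters
        flow = (p + q * adopters / market) * potential  # adoptions per step
        adopters += flow * dt
        history.append(adopters)
    return history

curve = bass_diffusion()
print(f"adopters after 40 steps: {curve[-1]:.0f} of 1000")
```

The S-shaped curve this produces is the stocks-and-flows rendering of the multi-sided-market story: slow start while neither side has a reason to join, then imitation takes over.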

The best book in my view is still Sterman's Business Dynamics, which is much broader in scope than the title suggests (it's a play on earlier book titles). There will hopefully be a second edition in the next year or two.

Edit: I should note however that when I see "system" I pattern match on what I know best. But there's a whole discipline of "System Engineering" which comes mostly from defence, aeronautical and astronautical fields. It has a high focus on formalisms to try to govern immensely complex technological efforts.


Formalism as you describe it is an abstraction that we invented to help us model stuff. It's not more true, or more accurate, or strictly better than any other abstraction.

The fact that design and systems and law and economics and philosophy and etc. etc. are ultimately squishy and informal is actually a reflection of the truth. Down there at the bottom of everything? It's humans. Subjective, emotional, informal humans. That's the baseline. Use formalism where it helps, absolutely, but don't trick yourself into believing it's a better way of being, of doing. It's not.


It is an abstraction, that much is true. But formal languages are built on top of logic. Formal abstractions force us to stay true to logic, and that is why they are effective.

Hence the term "formal language" over just "language." All language is an abstraction over something. The term "formal" indicates a deep connection with logic. What this means is that a formal language is developed from a set of axioms, so as long as the axioms hold true, the rest of the language does as well.

The same cannot be said for other abstractions.

To put it plainly: formalism is abstraction, but not all abstractions are formal. Don't trick yourself into believing otherwise.

>Down there at the bottom of everything? It's humans. Subjective, emotional, informal humans.

The baseline is not humans. We are one part of the universe, not its center. Logic is the baseline. Logic is the thing universal to not just all humans, but all living beings and all things. This has been repeatedly observed to be true, and therefore one is wiser to hold logic as a universal truth rather than an anthropocentric view of the universe.


> The baseline is not humans. We are one part of the universe not the center. Logic is the baseline.

Logic is not the baseline of human constructs.


> Let me put it to you this way. Anytime you see the word "Design" it refers to a field where we have little knowledge of how things truly work.

http://www.anft.net/f-14/f14-history-f14a.htm

13 matches for "design".


And for all 13 of those "designed" aspects of that aircraft, they cannot know whether the design was the best possible one. It's just a best guess, because they lack the ability to calculate the optimal configuration for that aspect of the aircraft, hence why they turned to "design" instead. That is surely a lack of knowledge, and is exactly what I'm referring to.


This is good and represents the dilemma faced by engineers: should I make shit up or preserve the integrity we have? I agree with most of your argument, but I have to say that what you describe mostly applies to shitty designers. There are good designers too, who consistently do good work.



