We must seek a widely-applicable science of systems (hiranmay.xyz)
81 points by hdarshane 5 months ago | 67 comments



Having studied complexity from a computational perspective (via the Santa Fe Institute and first-wave cybernetics) and a natural-sciences one (via Dave Snowden and Alicia Juarrero), my preference is to stay away from modelling complex systems, particularly complex adaptive ones. There is value in modelling, but heed the advice that all models are wrong. If you want to understand why, take a look at Stephen Wolfram's computational irreducibility and Dave Snowden's Cynefin framework.

As for your interest in self-assembly and emergence I would highly recommend Alicia Juarrero's Dynamics in Action and Context Changes Everything - they are both tapping biological sciences to update and better inform our views of the world in deeply meaningful ways. The former changes our notion of cause-and-effect as the driving force in complex systems, stepping away from the Newtonian billiard ball frame. The latter expands on it talking about how constraints underpin the actions and dynamics in complex systems.

I'd agree that I'd love to see some convergence eventually in the complexity sciences world - but it is a new science relatively speaking - the divergence is a positive property in my opinion!

Keep up the energy, keep writing and keep researching! I enjoyed your post; it reminded me of the excitement I have for the field as a whole and the thirst I had for very similar questions! I wouldn't have guessed you were a 16-year-old had you not stated it. Be prepared to have fundamental views changed and get comfortable with uncertainty!


There's value in studying this stuff rigorously even if you can't predict the exact future state of complex systems. Being able to model whether a system tends towards equilibrium or disequilibrium is enormously valuable.

In economics, George Soros's theory of reflexivity, for example, is a rejection of the efficient market hypothesis. The idea here is that price signals can lead to second-order effects and market disequilibrium.

In ecology/climate, it's very useful to understand what kinds of perturbations (introduction of cane toads to Australia) are more likely to break equilibrium.

In fluid dynamics, we still don't really understand turbulence, but we can do useful modelling in wind tunnels without grokking the fundamental principles.

As the world becomes increasingly interconnected it becomes even more important to get a rigorous understanding of this science. We might not get to the power of Hari Seldon's psychohistory anytime soon, but there's real value we can gain along the way.
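To make the equilibrium/disequilibrium point concrete, here is a tiny sketch (mine, not from any of the works mentioned; the growth rates are just illustrative) using the discrete logistic map, where a single parameter moves a system from a stable equilibrium to oscillation to chaos:

    # logistic map x_{t+1} = r * x * (1 - x): one knob, very different regimes
    def settle(r, x=0.2, burn_in=500, show=6):
        for _ in range(burn_in):      # let transients die out
            x = r * x * (1 - x)
        tail = []
        for _ in range(show):         # record where the system ends up
            x = r * x * (1 - x)
            tail.append(round(x, 4))
        return tail

    for r in (2.8, 3.2, 3.9):
        print(r, settle(r))
    # 2.8 settles to a single fixed point (equilibrium), 3.2 flips between
    # two values, 3.9 never settles (chaotic / disequilibrium)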


Statistical mechanics is another good example here. As I understand it, it's what enables much of our understanding of the very early universe despite us being unable to model it explicitly or observe it directly.


> In fluid dynamics, we still don't really understand turbulence,

Not to be pedantic - isn't this mathematically well described?

You seem to have broad knowledge, am appealing to your insight :)


The regime change from laminar to turbulent flow is chaotic, so while looking at the system in bulk will give you a decent description of it, looking at any one part of the system will give you a random answer. This becomes problematic at system boundaries, especially in engineering, where we would like to understand the safety factor. Simulation can give you a reasonable starting place, but at the end of the day you still need test articles to see how it performs in reality.

For example, would you consider the three body problem well described? Even if it is, the solution goes chaotic rather quickly.
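(To make that last point runnable: a rough sketch of mine, not from the comment above, using the Lorenz system with its textbook parameters. Two trajectories that start a billionth apart end up completely decorrelated, which is the practical sense in which "well described" and "predictable" come apart.)

    # crude Euler integration of the Lorenz equations; precision isn't the point
    def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.0 + 1e-9)            # nearly identical initial conditions
    for i in range(4001):
        if i % 1000 == 0:
            d = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {i * 0.01:4.0f}   separation = {d:.2e}")
        a, b = step(a), step(b)
    # the separation grows from 1e-9 to roughly the size of the attractor itself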


Gotcha, thank you, appreciate the insight


The field of cybernetics covers 100% of this context

The fact that nobody considers cybernetics an actual functional study is an unmitigated tragedy - based on politics - that needs to be fixed immediately

Wiener, etc. laid the whole thing out and gave a very compelling and clear way to approach questions and complexity, and as a field we are completely ignoring it, to our own detriment, with this idea that somebody still needs to invent it


Any recommendations for learning it?


My personal approach has been trying to find a general Markov decision process solution to any feedback loop process I can identify.

I first encountered this in the scientific method, where an experimental hypothesis-testing loop defines our epistemology.

That intersected with computers for me while trying to learn pathfinding algorithms in the early video game development I was working on as a young teenager.

While still in high school I read about John Boyd and the OODA loop, which got me interested in process control systems defined in a mechanical way. This led me to the Air Force, to studying economics, and to trying to apply the concept of a processing loop to generalized decision-making.

Because at the time economics had just turned into a statistical science and cognitive science was new, my economics study really became behavioral economics, and I began to study how humans actually do the “loop”: producing experimental data by actuating our effectors and sensors and feeding it into a prediction network. My research came to center on how we might model infant learning in computing architectures and prediction for human action - Frank Guerin and Ben Goertzel became my loose mentors for this direction

It was at that point that I really started to understand the Markov decision process, and I got deep into reinforcement learning in the tradition of Richard Sutton, starting around 2008.

If you go deep enough into generalizable reinforcement learning agent structures, you wind up in cybernetics, because you have agent, environment, predictive-modeling, and physical-system requirements that are best defined in the way cybernetics allows us to define them.
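If it helps make the agent-in-a-feedback-loop framing concrete, here is a minimal tabular Q-learning sketch on a toy corridor MDP (entirely my own illustration, names and numbers included, in the Sutton tradition mentioned above). The agent acts, senses the result, and updates its predictions, which is the same sense-model-act loop cybernetics describes:

    import random

    N, GOAL = 5, 4                        # states 0..4, reward only at the goal
    Q = [[0.0, 0.0] for _ in range(N)]    # action 0 = left, action 1 = right
    alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

    def env(s, a):                        # the "environment" half of the loop
        s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
        return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

    for _ in range(300):                  # the "agent" half: act, observe, update
        s, done = 0, False
        while not done:
            if random.random() < eps or Q[s][0] == Q[s][1]:
                a = random.randrange(2)   # explore (or break ties randomly)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2, r, done = env(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    print([round(max(q), 2) for q in Q])  # value estimates rise toward the goal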


I recommend:

- Designing Freedom by Stafford Beer

- Brain of the Firm by Stafford Beer

- Thinking in Systems by Donella Meadows


Exactly. All human systems are sociotechnical, and people within systems respond differently when they know that they're being monitored, and even more differently when they know that the monitors know that they know they're being monitored.

Exact models are somewhere between impossible and implausibly expensive for a non-trivial system. Approximate or die.


What’s not exact about a human brain…? Complex, sure. But I don’t see the difference in kind you allude to


It does not help to have a perfect and exact rule from initial conditions to future prediction... if you cannot have a complete and exact description of those initial conditions.

We might never be able to get this exact description of the internal state of a brain; that is what makes perfect prediction impossible.


As it turns out, pretty much everything. Individual neurons are imprecise and adaptive and they form networks that show chaotic behavior.

Basically I'd turn the question around - what makes you think brains are exact?


Thanks a lot! SO many interesting recommendations in your comment. Yes, I agree that divergence is a very positive property right now, it might also allow for even more unexpected cross-pollinations and comparative studies as the pool of studies gets bigger.

Yes, I think I agree that just general non-determinism within such systems makes it impossible to "model" them. But, I believe regardless of how stochastic the behaviors are, there are certain properties that the systems might be optimising for. Long way to go for all of us.

Thanks again!


What do you suggest we do instead of modeling complex systems…? Isn’t that kinda an important part of aerospace, psychology, and meteorology, just to name three random things? Likely misinterpreting; I doubt anyone is anti-modeling in general!


To be clear, I didn't say "don't model" - models can have value, but all models are wrong because the map is not the territory. That's a first-order problem: any representation of something is not the thing itself and bears no requirement to continuously and accurately represent it.

Another first-order problem is that you must choose, from all the data that is perceivable, what is pertinent to model, because you can't perceive and model it all (at least not in complex systems) - 'is perceivable' and 'choosing what is pertinent' act as filters that are rarely questioned. How do you know that 'what is pertinent' hasn't changed if your only way to reason about the system is through the model itself? How do you know you aren't perceiving something more relevant? Designing models to be sensitive to what we mean and not what we've stated is a very hard problem. The state space of reality is far bigger than we can sense and store.

A second-order problem lies in the human propensity for low-energy states: when given a model or a metric we'll champion it as the truth or the way because it is easier than facing the complexity - but complex systems are crafty; they adapt continuously. For example, your boss wants 100% test coverage and a dashboard has been created to report it to them... Fine, we'll test the getters and setters, we'll modify the dashboard code to ignore certain files, we'll write pointless tests that just exercise code but make no assertions... etc. They likely won't check, provided the dashboard keeps reporting what they want to see.

Another second-order problem is that in complex adaptive systems the act of measurement changes the behaviour of the system itself. We know this intuitively: pull out a camera and start recording some strangers who were going about their business.

As for "what should we do instead" - complexity is all about context dependence... so it depends? If you are in a complex adaptive system, get involved! You can't know it all - so have fun with it.


I feel like any such unified theory of complexity would approach the complexity of unified field theory.

Of course you would have to model its complexity to know with any certainty.


My PhD thesis was about the extension of smooth functions. In the 1960s, ideas in this area were studied under the umbrella of ideals of smooth functions, and had applications to catastrophe theory. So I spent some time reading about this topic. If you want a reading recommendation, the book "Differentiable Germs and Catastrophes" I remember being good, as well as "Stable Mappings and their Singularities".

I bring this up because I believe that at the time catastrophe theory was seen as a "widely-applicable science of systems". Or at least some practitioners tried to sell it as such. This point of view eventually soured, to the detriment of catastrophe theory, and the field cleared out. I don't think this was a good thing: catastrophe theory (the study of the singularities of smooth maps and their consequences for dynamical systems) is an interesting topic with many remaining open questions. But it was seen as cringe that people were, e.g., using Whitney's classification of the generic singularities of planar maps to try to say something about predator/prey dynamics or whatever. Any claim about applications of catastrophe theory was infected with this stink, and so people lost interest in it.


Thank you for your very interesting comment and the reading recommendation.

I agree with your general sentiment that chasing "wide applicability", or trying to force a narrative that a given theory will explain everything, might be hugely detrimental.

I agree that my post, and many discussions about complex systems, specifically ones cast in an evangelistic light, might be over-optimistic.

We definitely must approach all work on such a theory with careful attempts not to overhype it. My post was an attempt to lay out some interesting possibilities.

We must remain optimistic anyway but I will be more careful in this regard going forward. Thanks again.


I feel like the real problem here is the way that research has to be sold in order to get funding (I assume the same was true in the 1960s). There is no inherent harm in trying to find a universalizing science, but market imperatives drive the creation of intellectual hype cycles with the associated booms and busts. That can then discredit entire fields.


I wonder if you're looking for cybernetics? The original meaning from the 1940s, not the corrupted science fiction shibboleth. Wikipedia - "The initial focus of cybernetics was on parallels between regulatory feedback processes in biological and technological systems...Wiener introduced the neologism cybernetics to denote the study of 'teleological mechanisms'."

Wikipedia suggests it lost a lot of drive in part because AI and computer science split off from it.


Yes, I'm aware. I think cybernetics has definitely drifted away from its original scope; modern interpretations of cybernetics-ish ideas and complex-systems studies are also slowly incorporating the social sciences, economics, and so on.

Interesting to note that if you look for journals on cybernetics, most papers are closer to EE, deep learning, and some telecommunications here and there - if that constitutes a good metric of how far the semantic meaning has shifted from its original identity.


If you haven't already, I highly recommend reading "Introduction to Cybernetics" by Ashby; it's got the nitty-gritty formalization that the later "Second Generation" schools of cybernetics drifted away from.

You can get a free PDF of the book here: http://pcp.vub.ac.be/ASHBBOOK.html

> This publication has been made possible largely through Mick Ashby, the author's grandson, who has convinced the copyright holders (the Ashby estate) that they should allow us to produce an electronic version.


While I agree that nitty gritty formalizations are important to grasp, it's been made clear that the formal sciences were often insufficient in facilitating insight on the type of complexity cyberneticians cared about -- especially that of process oriented circular causality, which often involved paradox and self-reference.

See Stuart Umpleby's lectures on the History of Cybernetics: https://www.youtube.com/playlist?list=PLB81F4FC0EDC4DECC

Or Walter Tydecks on the cultural understanding of mathematics as a sign system: http://www.tydecks.info/online/themen_e_spencer_brown_logik....

> Shannon was like Spencer-Brown a mathematician and electrical engineer. In his study of data transmissions, he has demonstrated how any medium generates background noise that interferes with the transmitted characters. To this day, mathematics has not perceived or not wanted to perceive the elementary consequences of this for mathematics and logic. To this day, mathematics is regarded as a teaching that is independent of the medium in which it is written and through which it is transmitted. Nobody can imagine that the medium could have an influence on the signs and their statements. Mathematics is regarded as a teaching that is developed in a basically motionless mind.

Having taken the last 2 years to really go through the discourse surrounding second-order cybernetics beyond Ashby's introduction and Stafford Beer, I learned of a pivotal text called Laws of Form, which was at the heart of second-order cybernetics. The formal system was directly incorporated in Varela and Maturana's thesis of autopoiesis, and Niklas Luhmann was also /obsessed/ with Laws of Form for much of his academic career. This is a progenitor of our current interest in enaction and embodied cognition!

With the book's 50th anniversary in 2019, the discourse has been seeing a rejuvenation thanks to some small conferences at https://lof50.com. I've seen some intriguing applications. Some are a bit far out, but that's the nature of systems thinkers, yeah?

A couple that may be of interest:

William Bricken's work on Iconic Mathematics, a system that covers K-12 math and bridges it to purely physical manipulation, shedding the complications that fuel general math phobia, such as: associativity, commutativity, division by 0, bases, functions, order of operations, the disambiguated meaning of equality. Instead, everything is a /structure/.

Bricken's work on computational implementations of Iconic Logic. One example includes a novel SAT algorithm / tautology verification algorithm called Virtual Insertion, which makes extensive use of the notion of semipermeable boundaries, in which the context of a boundary still pervades its content.

Gitta and Ralf Peyn on FORMWELT, a yet-to-be-released system aiming to facilitate precise, clear communication of nebulous natural language concepts through the use of injunction and self-reference.


Wow. This is a really informative comment.

>While I agree that nitty gritty formalizations are important to grasp, it's been made clear that the formal sciences were often insufficient in facilitating insight on the type of complexity cyberneticians cared about -- especially that of process oriented circular causality, which often involved paradox and self-reference.

I wholeheartedly agree. This is why I think we are dealing with an entirely new type of science; the basic principles and theorems are yet to be discovered.

Lots of interesting pointers and links throughout. Thanks again!


I would say that first-order Cybernetics (as exemplified, IMO, in Ashby's book) is all about symbolic formalism for "circular loops in causality", and that Second-Order Cybernetics is a species of mysticism (I hasten to add that I don't mean that in a derogatory way. I'm a Mystic myself.)

You could imagine a spectrum from logic to cybernetics to philosophy to mysticism. It's all fine; I just believe it's important to be clear where on the spectrum you're working. If you want to build machines that do things, use Cybernetics and feedback/control theory; if you want to grok reality and the self, eat a mushroom and read "Gödel, Escher, Bach" or the Tao Te Ching.

In re: "Laws of Form", yeah, he identifies the boundary or distinction between the mystic realm (non-form, non-distinct, non-dual) and symbolic logic, and then builds a lovely binary Boolean logic directly off of that. It's a tour de force.

The really interesting thing is that George Spencer-Brown figured out how to deal with circular logical systems by introducing the concept of imaginary Boolean values.

It's a formal system for symbolic logic, and you can indeed build a lovely and efficient SAT solver with it using Bricken's Basis. E.g.: https://ariadne.systems/pub/~sforman/Thun/notebooks/Correcet...

(For reference, here's the LoF/Bricken formalism)

    Arithmetic

    (()) =
    ()() = ()

    Calculus

    A((B)) = AB
    A() = ()
    A(AB) = A(B)
(That third rule in the calculus, discovered by Bricken, is the kicker!)
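(If anyone wants to poke at this on a computer: a very rough sketch of my own below, nothing to do with Bricken's Virtual Insertion, just the standard Boolean reading of forms - the empty mark is "marked", juxtaposition reads as OR, enclosure as NOT - plus a brute-force tautology check over all assignments.)

    from itertools import product

    # forms as nested tuples: ('A', ('B',)) means  A (B),  i.e.  A or not B
    def marked(form, env):
        for item in form:                      # a space is marked if any item is
            if isinstance(item, tuple):
                if not marked(item, env):      # enclosure = negation
                    return True
            elif env[item]:                    # a variable letter
                return True
        return False

    def tautology(form, variables):
        return all(marked(form, dict(zip(variables, vals)))
                   for vals in product([False, True], repeat=len(variables)))

    print(tautology(((('A',), ('B',)), 'A', 'B'), 'AB'))  # ((A)(B)) A B -> False
    print(tautology((('A',), 'A'), 'AB'))                 # (A) A       -> True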

cf. "The Markable Mark" George Burnett-Stuart http://www.markability.net/ GBS (not GSB, they're two different people) has managed to extend the system to Predicate Logic as well!

William Bricken's home page: https://wbricken.com/

His Iconic Math Page (a wonderland!) https://iconicmath.com/

To sum up, we need symbolic formalism to communicate and build machines, otherwise we're just sort of reading poetry to each other, which can still be helpful, but in a different way than building machines. (And when I say machines in the context of Cybernetics I mean to include those made out of people! "Human use of Human Beings", eh?)

As an aside, there is a fascinating talk and book "My Stroke of Insight: A Brain Scientistʼs Personal Journey" by Dr. Jill Bolte Taylor:

> In it, she tells of her experience in 1996 of having a stroke in her left hemisphere and how the human brain creates our perception of reality and includes tips about how Dr. Taylor rebuilt her own brain from the inside out.

https://en.wikipedia.org/wiki/My_Stroke_of_Insight

https://youtu.be/UyyjU8fzEYU (TED Talk)

When "Laws of Form" talks about the beginning of logic from the pre-logical non-distinct or non-distinguished realm, the "mark" that divides the world into A and Not-A, it turns out to be quite literal, purely biological: the brain does it. When the part of her brain that "does logic" was rendered inoperative due to the stoke she was having, she reports an inability to tell herself from the world around her accompanied by oceanic bliss...

Mysticism is real, you just can't talk about it. The very first chapter of the Tao Te Ching says it right off: the Tao that can be talked about is not the Tao.


To me, Christopher Alexander's work comes closest to describing "systems science". I recommend Notes on the Synthesis of Form, The Timeless Way of Building, and then skipping to the Nature of Order series.

I'm of the opinion right now that what we call "design" and "architecture" is really just the science of finding stable habitable zones in high-dimensional problem spaces.

What's cool about Alexander's work is that he makes a great case that this stuff is objective phenomena that can be studied!

I'm planning to write much more about Christopher Alexander on my own blog in the future, but meanwhile I can recommend Dorian Taylor's excellent works:

https://the.natureof.software

https://doriantaylor.com

I gave a talk on this subject at DDD Europe this year, so keep your eyes out for "Timeless Way of Software - Taylor Troesh" on their YouTube channel :)

https://www.youtube.com/@ddd_eu


Interesting! Have heard about Christopher Alexander's work, but have never really jumped in. Maybe I should do that now.

>I'm of the opinion right now that what we call "design" and "architecture" is really just the science of finding stable habitable zones in high-dimensional problem spaces.

Wow! Yes! Agree with this view that all design and organisation is mostly just finding the most optimal/favorable state for the entire system to be in. What constitutes favorability might be low free energy, high interconnection, distributedness, etc.

May I suggest you look into the work of Jeremy England in a similar light of self-assembly and optimisation in non-equilibrium states? Some really interesting takeaways there; my sharing my own interpretations might just add epistemic noise, as I'm not sure I understand every bit of it 100%.

There was a great article about him in Quanta, and you might want to check out his talk at Karolinska Institutet.

Thanks for the recommendations, and I'll look out for your talk!


I strongly recommend picking up Carliss Baldwin’s Design Rules. It is a great addition to the thinking done in Notes on the Synthesis of Form, in a more empirical and specific technological context (the advent of the computer).


Without knowing much about what you reference, this reminds me of the second law of thermodynamics [1], which introduced entropy as a general concept for understanding many phenomena.

https://en.m.wikipedia.org/wiki/Second_law_of_thermodynamics


Yeah, entropy definitely has been included in some forms of thought that are closely correlated with complex systems. Schrödinger early on had a very interesting insight on life and entropy, and people like Jeremy England are taking that view forward. Work by England, Crooks, etc. very beautifully relates entropy to the probability of any state x existing more than all others.

The information theory counterpart of entropy seems extremely relevant in describing coordination failures and some forms of stochasticity that aren't necessarily derived from lots of molecules with high degrees of freedom interacting together. It also might hold high explanatory power in describing why trickle-down/bottom-up and top-down effects are slowly negated and diluted - although I believe this is fuzzy thinking and we need a better tool than just entropy to understand this.
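(For anyone who wants the information-theoretic counterpart spelled out: a tiny sketch of mine, with arbitrary example distributions, of Shannon entropy in bits as a measure of how uncertain a system's state is.)

    from math import log2

    def shannon_entropy(p):
        return -sum(pi * log2(pi) for pi in p if pi > 0)

    print(shannon_entropy([0.25] * 4))                # 2.0 bits: four equally likely states
    print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits: nearly deterministic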

Thanks!


Somewhat related, I feel like the book Thinking in Systems should be taught in high school

https://www.goodreads.com/book/show/3828902


Absolutely. Broadly, I wonder what leads to schools not adopting emerging fields as part of the formal curriculum. Interesting to note that even in 2021, only 51% of US K-12 high schools were found to have a CS course. It does not seem like a capital problem to me. Is this just inertia or a legibility problem?


I'm inclined to say it's mostly inertia.

As with any monopoly, the incentives for public school administrators are all out of whack. Adding a CS curriculum takes real time and effort (lost summer vacation time, effort required to convince the board/PTA, picking a curriculum, hiring teachers for an unfamiliar topic). It brings with it real risks and headaches (budget issues, vulnerability/ignorance in a new domain, possible failure/embarrassment, board/PTA conflict, dissatisfied students/parents). Meanwhile the benefits are not tangible and the cost of not implementing a new CS curriculum is zero.

For public school administrators (as with all process owners) it's far easier to simply repeat what they did last year.


Oof, yeah. The legibility gained per unit of effort, for most admins working in a system that inherently incentivises tangibility and observable "utility" (whatever that may be in this case), reduces any hope of seeing much change.

Maybe this is another good problem that the systems sciences might hold a great explanation for :-O.

Thanks!


Who will teach them?


I was expecting someone to mention this very good book. In a time when politics is dominated by populism, this should be part of the school curriculum. Reality is complex. There are often no easy or simple solutions to get a certain number up or down. Even the author of this book writes that being an expert in systems science does not give her the superpower of never being surprised by outcomes. But being able to think in systems is still a very valuable ability. A lot of humility and appreciation for complexity can be gained from reading this book.


It won't make any difference because most people don't want to think. And those who do feel like thinking love to think about very different things.

As problems get more complex this is the main headache - keeping everyone in the same boat, rowing in the same direction.


> It won't make any difference because most people don't want to think.

On what basis do you say this? What exactly do you mean?

I don't mind that the above claim is cynical. But I think it is (a) wildly overconfident and (b) poorly reasoned. Check your biases. Also check your pain points -- would I be crazy to guess that you've become jaded about students' ability to learn, think, and/or care about education?

Next, consider a specific scenario so we're not talking past each other. Let's say 5% of high schools decide to teach Thinking in Systems. Say they get a grant so that someone experienced (in the book and subject matter) comes in and teaches for a few weeks as a special topic (at no cost to the school).

Now, think statistically and empirically. What kinds of effects will there be on students? If you are intellectually honest, you'll have to ask questions, maybe even gather some data. If you put some effort into thinking, you probably won't conclude there will be zero effect.


It would be very useful to have an inheritance system to feed outputs of scientific theories into other theories. Just like how computer programs can access code from included libraries.

Chiara Marletto and David Deutsch developed such a system, called constructor theory. It is built up from constructors, which are like little computer programs that describe what they refer to as different "tasks". And these tasks are counterfactual operations.


> How does the body react to imbalances in some internally-relevant biomarkers?

How systems regulate themselves (and also perpetuate themselves) is the question behind cybernetics, and in the organisational context, management cybernetics. The problem generalises way beyond biology. The information theoretic and control implications are fascinating.
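A toy sketch of that regulation loop (my own illustration with made-up numbers, not an excerpt from anyone's book): a proportional negative-feedback controller pulling a "biomarker" back to its setpoint after a disturbance.

    SETPOINT, GAIN = 5.0, 0.3       # illustrative numbers only
    level = 5.0

    for t in range(12):
        if t == 3:
            level += 4.0            # external disturbance (the "imbalance")
        error = SETPOINT - level    # sensor: compare to the reference
        level += GAIN * error       # effector: proportional correction
        print(f"t={t:2d}  level={level:5.2f}  error={error:+.2f}")
    # the level jumps at t=3, then decays back toward the setpoint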

(I’m writing a book on the subject; ping me if interested)


Yes! I think it was a parallel study between nervous systems and electrical engineering principles by Wiener et al. that kickstarted cybernetics?

Interesting! Will ping you!


If you're interested in getting an introduction to cybernetics, theres:

1. Ashby's introduction, but that's from the 1950s.

Foundational, but is limited to 1st order cybernetics.

2. E. W. Udo Küppers, A Transdisciplinary Introduction to the World of Cybernetics (2023): https://link.springer.com/book/10.1007/978-3-658-42117-5

Provides a broad picture and a basic intro to the history, central concepts, and the various thinkers involved.

3. Stuart Umpleby's 6 hour lectures on the Fundamentals and History of Cybernetics (2006): https://www.youtube.com/playlist?list=PLB81F4FC0EDC4DECC

Provides a historical overview and introduction to applications of cybernetics in social sciences.

Crucially, Umpleby's history mentions the exact reason why cybernetics never quite took off in America (well, aside from the fact that it was just difficult to fit into a disciplinary box). Heinz von Foerster, head of the Biological Computer Laboratory, did not want to provide a b.s. military justification for Dept. of Defense funding after the Mansfield Amendment (which was passed in response to student protests demanding divestment from the Vietnam War).


Huge thanks for the list! Interesting to wonder how influential, or just what kind of field, cybernetics would be today without the funding issues, the lack of categorisability, and the field's slow drift away from Wiener's original vision.


There is an approach that I like and that I think allows one to compress quite a few different description models from different domains. Take a look at this image from Wikipedia: https://commons.wikimedia.org/wiki/File:Systematik-Philosoph... You see "Theoretische" and "Praktische" philosophy in the rectangle with a diagonal, which probably means that you can sort something based on the relation between matter and information, where something is more practical and less theoretical (say 30% practice and 70% theory). Add a stimulus-response diagram at the bottom and various information-material circuits inside, and you can show how something is transformed.

There is a development of this idea by one author which leads to something like https://imgur.com/a/AmF7AJe and currently the author is trying to find the connection between syllogistics, Boolean algebra, Euler-Venn diagrams, and more. You can take a look at https://www.youtube.com/@Syllogist/videos - many of the recent videos have English subtitles. It's hard to describe the whole idea at once, but maybe someone will have the courage to learn something from it.


This is the problem I have with any study of "systems". It all just feels like overly ornate verbiage for "hey, this thing kinda looks like this other thing if you squint". It's very difficult to distinguish that imgur image from complete Time-Cube-level quackery. "Volume (more than 3 points) <==> deep cause-and-effect relationships <==> trivalent linguistics (he gives the book to his brother)" ... really? Is this really helping humanity understand anything?


In some ways it’s my fault that I didn’t present it very well, but perhaps this should be taken less strictly, as something of a source of inspiration, like some kind of painting. Someone can then take it more strictly and make some kind of Curry-Howard isomorphism; or, from the idea that the information concept of a file corresponds to the material concept of force, something in dimensional analysis could grow.

That is, it may be less formal at the beginning, but then a craftsman may be found who will do everything quite strictly.

Nevertheless, this author is now developing the theory of syllogistics and there are quite practical results - for example, logic textbooks for lawyers can be translated into Boolean algebra and made more precise.


my philosophical take is this all revolves around what constitutes "one system"

on one level, something is a subsystem of a larger supersystem

on another, it's all the one and only system. but why wouldn't the components be systems in their own right?

and sure, it's all about the 'appropriate' level of abstraction. but my point is that any "science of systems" must give a working theory of levels; or at least say something on how to grapple with this. it's not sufficient to leave it as "that aspect is an art"


Every book I’ve read on systems pretty much opens with exactly this point. It is merely the questions you want to ask, or otherwise your scope of meaningful control, that determines what is a suitable definition (or delineation) of “the system of inquiry.”

If that seems confusing, circular, or unsatisfactory, consider reading the work of the Pragmatists for the eye-opening revelation that this is what your brain is doing 100% of the time to 100% of your sensory input in order to make any sense of anything whatsoever.

Your perception is intrinsically linked to what you can do with that perceptual data. You separate a system from its components the same way you separate a rock from dirt, one piece of dirt from another, or soil from a tree: you do it based on what’s useful to you in the moment.


Seek we must, but will we find something?

Complexity science is the major remaining terra incognita of our era. The allure of seeking to break into such a genuinely new domain is strong. The intellectual (and not only intellectual) rewards would obviously be beyond compare. The dimensional extremes probed by "standard" micro and macro physics are increasingly into diminishing returns: thousands of people, gargantuan budgets and devices, etc., but, in a macro sense, rather disappointing progress. Our mental toolkit and understanding of the world in 2024 is not that different from that of the Einstein/Bohr era circa 1924. It has been an era of fleshing out the details, magnificent and more productive than any previous period of history, but it saturated without turning things upside-down.

All the while, complexity is all around us, even inside us. Mysterious, defying attempts at description, let alone explanation. One can set up experiments for next to nothing. Complexity is very "accessible". So why is it still a sketch of a field rather than an actual field?

If the constraints are not external then they must be internal (cognitive limitations, blind spots etc). For sure we lack mathematical tools. But maybe we lack even an adequate set of relevant pre-mathematical notions, these vague but powerful concepts that precede the sharper analytical tools and elaborate equations.

One thing is for sure: very smart people have tried very hard, and if you are going to see further you must (at least) climb on their shoulders :-)


Great article :). You definitely will be going places.

  why everything exists in a holonic sense i.e. a "whole" in its own is composed of many wholes themselves, and goes on to partake in a bigger "whole".
You’re gonna love philosophy!! This is covered most definitively and scientifically by Hegel’s Science of Logic, but that’s like super advanced high level philosophy so you might not want to start there lol. Either way best of luck, I totally agree with your general thesis! You’ll be happy to know that, in general, this has already been solved by Kant, Hegel’s idol - even tho people have forgotten in the meantime.

http://www.autodidactproject.org/quote/kant_CPR_architectoni...

http://staffweb.hkbu.edu.hk/ppp/ksp1/toc.html

I’m writing a book on all this atm, so feel free to hmu anytime if you want someone to bounce ideas off of! It’s a profitable time to be a philosopher of science, that’s for sure


The Open University runs a Master's program in systems thinking (which I'm currently studying). There's a free primer course called 'Mastering systems Thinking in Practice' that gives a good overview and is full of references for further reading in the field: https://www.open.edu/openlearn/science-maths-technology/mast...

There's also a developing community at https://www.systemsinnovation.network/, where there are also many (subscription) resources.

The articles, books, and guides available (free) at https://thesystemsthinker.com/ are also worth a look. This mostly pertains to system dynamics rather than any other traditions, but it's a great resource for understanding complexity.


Norbert Wiener had this all figured out 70 years ago - everything old is new again!


I would recommend reading:

Complexity by M. Waldrop https://commoncog.com/learning-from-waldrop-complexity/

The Systems Bible by J. Gall (This one is an odd one but it is good for developing a sense of humor) https://novelinvestor.com/notes/systemantics-how-systems-wor...


This is a wonderful post and articulates something I've also struggled with over the years - and I'm much older than 16!

To me, it can be distilled down to human decision making... how flawed we are in terms of cause/effect, how logical fallacies are so powerful. Ultimately we discovered that the Scientific Method was a tool for us to overcome these flaws. Hopefully there is a similar tool out there in the ether which can help us to navigate complexity with similar confidence.


Thanks a lot!

I think that tool might just be any general framework that gives a better idea of what small, microscopic actions might lead to at a high level. We definitely cannot quantify (or qualify) how actions seep through, given free will and a general fringe-ness to us. We can still at least identify the most probable scenarios without much difficulty. I believe behavioral economics, or even epistemic studies, do a good job of identifying general trajectories - by virtue of the fairly high reducibility of human behavior and the benefit of hindsight, respectively.

Indeed the human mind by itself isn't really trained to think beyond maybe one or two orders of implications that actions hold. Hindsight and somehow modelling agentic behavior by understanding incentives might do the trick.

Thanks again!


Great post, you're clearly on the right track. I totally agree that there is a major gap in modern theoretical understanding of how and why complex systems emerge. Breakthroughs in understanding the physical/informational processes that underlie complex adaptive systems could be immensely useful.

I'll add a word of caution though. I'm most familiar with systems theory applied to biology. Biology is, in my opinion, the pinnacle of complexity. However, it's less well acknowledged that it's also very, very complicated. This is important because it means that we have very incomplete knowledge of the base components of any biological system. Like we still don't really know the basic biochemical function of most proteins. Hell, we only just got a partial view of what most proteins even look like (in isolation) via AlphaFold. Measuring the number of all of the proteins in a single cell is effectively impossible with current and near future technology. Any feasible solution for this would probably be destructive, meaning that true time-series measurements are also impossible. These details of what we know and what we can (or can't) observe matter quite a lot, not only because they are the sort of raw matter of a systems theory, but also because they are the levers that we have to use to manipulate the system. There are only about 1000 proteins that we know how to reliably bind molecules to. There are (probably, we're not sure) more than 50k different proteins, if we include isoforms. So, all that to say, we have very incomplete knowledge of biology and very incomplete control of cellular behavior.

This isn't meant to discourage you! Instead, I think there's a tremendous opportunity for systems theory to be really useful (especially in biology) if it becomes a practical, routine analysis like statistics. But, for that to happen, we have to keep in mind the limitations and specific details of the system we're dealing with.


Thanks a lot for your kind words!

Really like your thoughts!

Indeed, lack of time-series observability makes it harder for us to find general patterns or causal events.

Definitely agree that biology is the pinnacle of all complexity - IMO something like macroeconomics or human behavior within set systems (society, politics, etc.) is fairly reducible to a very small and finite set of incentives that agents optimise for (food, shelter, status, acceptance, etc.).

Given this, non-linearity and stochasticity still add up to a general non-determinism for the entire system.

Biology, on the other hand, is far more complicated to study as - correct me if I'm wrong - it's still hard to tell what agents in these systems are optimising for: reduction of free energy? Reproduction? General homeostasis? All of these play varying roles in different contexts, and then we still have to figure out how/why self-assembly and "wholes" emerging from smaller "wholes" (... ad infinitum) actually happen.

Really fuzzy thoughts, but I believe there is some merit in exploring reducibility and observability from a time-series perspective, while considering the effects of synchrony/asynchrony of observation, and later how much we can desirably steer systems. Really fuzzy, but I hope to work on this a bit more.

Thanks a lot for your very interesting comments! Not discouraged at all, love your view on systems theory being a "routine analysis" like statistics, i.e. a very generally applicable layer or meta-science that's an entirely new way to see things, which I should've articulated better in my post.


Interesting stuff!

I'm mostly thinking individual cells in a multicellular organism (i.e. lung cells in a person). It is indeed very hard to understand what they are optimizing for. Obviously, the organism as a whole is under selective pressure, but I'm not sure how much an individual cell in a given organism actually "feels" the pressure. Like, they undergo many cell cycles during one organism's life, but they're not really evolving or being selected during each cell cycle. Of course, this isn't always true as tumors definitely display selective pressure and evolution. But for normal tissue, I prefer to think of cells as dynamical systems operating under energetic and mass flux constraints. They're also constrained by the architecture of the interactions of the genes and proteins in the cell. All that adds up to something that looks a lot like evolutionarily optimized phenotypes, but I think that might be a bit deceptive, as the underlying process is different. It's not at all clear to me though. You're really getting at some deep questions! You might find this paper interesting in that regard:

https://www.nature.com/articles/nmeth.3254

Regarding reducibility and observability of time series, you might also find work from James (Jim) Sethna's lab at Cornell interesting. The math can be a bit hairy, but I think they do a pretty good job at distilling the concepts down so that they're intuitive. The overall idea is that some complex systems have "sloppiness", like some parts of the system can have any kind of weird, noisy behavior, but they don't change the overall behavior that much. Other parts of the system are "rigid", in that their behavior is tightly connected to the overall behavior.

https://arxiv.org/abs/2111.07176v1
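(If you want to see the sloppiness phenomenon numerically, here is a small sketch of my own, not code from the paper: a sum of four similar exponential decays, where the eigenvalues of J^T J, the approximate Hessian of a least-squares fit, span many orders of magnitude - a couple of stiff directions and the rest sloppy.)

    import numpy as np

    t = np.linspace(0, 5, 50)
    k = np.array([1.0, 1.2, 1.4, 1.6])    # four similar decay rates

    def model(params):
        return sum(np.exp(-ki * t) for ki in params)

    # numerical Jacobian of the model with respect to the decay rates
    eps = 1e-6
    J = np.column_stack([
        (model(k + eps * np.eye(len(k))[i]) - model(k)) / eps
        for i in range(len(k))
    ])
    eigvals = np.sort(np.linalg.eigvalsh(J.T @ J))[::-1]
    print(eigvals)    # one large "stiff" eigenvalue, the rest rapidly shrinking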

You ought to get yourself connected with some folks at the Santa Fe Institute, if you haven't already. I know one affiliated professor, let me know if you want an introduction. At the very least, if you like podcasts, check theirs out. It's called "Complexity" and it's quite good.


Thank you so much for the link to those two papers. I'll try and go through them.

>Like, they undergo many cell cycles during one organism's life, but they're not really evolving or being selected during each cell cycle.

This is a really interesting perspective.

>You ought to get yourself connected with some folks at the Santa Fe Institute, if you haven't already. I know one affiliated professor, let me know if you want an introduction.

I have read a few posts from SFI faculty and seen some video lectures of Krakauer and others, but as you said, I should get in touch to some degree.

You're very kind and I really appreciate you offering to intro me! I would really love that!

Would you mind if I follow up on this via e-mail? Can I send one to the address mentioned on your Vanderbilt department page?

Thanks a lot!


Yep email me!


TLDR; we need more research into game theory

I totally agree; it all boils down to math. Linear algebra formed the foundation of a lot of what we have achieved today, including computer simulations and AI, but now society is posing problems that aren't grounded in linear algebra but in game theory, as the author describes. So we need to study game theory; that's what the next period of accelerated advancement will be based on.
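To gesture at what studying this looks like in code (a toy example of mine using the standard textbook Prisoner's Dilemma payoffs, not anything from the article): a brute-force search for pure-strategy Nash equilibria.

    from itertools import product

    A = ["cooperate", "defect"]
    # payoff[(row, col)] = (row player's payoff, column player's payoff)
    payoff = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    def is_nash(r, c):
        best_r = all(payoff[(r, c)][0] >= payoff[(alt, c)][0] for alt in A)
        best_c = all(payoff[(r, c)][1] >= payoff[(r, alt)][1] for alt in A)
        return best_r and best_c

    print([rc for rc in product(A, A) if is_nash(*rc)])   # [('defect', 'defect')]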


This is an incredible perspective coming from a 16 y/o.

For what it’s worth, it took me spending 12+ years studying biochem and adjacent topics at university, to reach a very similar perspective.

The one criticism I’d make here (and tbh it’s unfair to expect more from the author) is that there has been a lot of work done towards this already. There are many systems biology textbooks, a much greater number of systems papers, and even entire journals on the subject. So I would reframe the observations slightly: there is a lot of prior work, and we need to double down on it and cross-pollinate it more.


Thank you so much for the kind words! I'm definitely a little ignorant with respect to systems biology efforts. I have an okay-ish idea of the work of someone like Uri Alon, but it's definitely not enough. What resource would you recommend I jump into? Uri Alon's course? I believe I lack many prerequisites to take it.


To be honest I'm not really sure what advice to give you. You're probably in a spot where your knowledge/perspective is advanced enough to make you impatient around the fundamentals, and at the same time I'm not sure that you have the background required to take full advantage of more advanced materials. You don't seem to be where most 16 year olds are, is what I'm trying to say.

This is especially tricky in the field of bio / biochem, which feels like it requires memorizing a million facts before one gets to do any real thinking and reasoning. This unfortunately tends to filter out a lot of more mathematically-minded people who are stimulated by puzzles, which I think is a shame.

For what it's worth, I really enjoyed this textbook. I'm not sure you will find in it what you're hoping to find, but I hope it sparks even more curiosity; it's fairly advanced, probably more targeted at late-BSc or MSc/PhD students: https://www.amazon.com/First-Course-Systems-Biology/dp/08153...


>This is especially tricky in the field of bio / biochem, which feels like it requires memorizing a million facts before one gets to do any real thinking and reasoning. This unfortunately tends to filter out a lot of more mathematically-minded people who are stimulated by puzzles, which I think is a shame.

yes, this is exactly the part that makes it really really hard...

thanks a lot for the recommendation!



