Bonini's Paradox (wikipedia.org)
88 points by hhs on May 26, 2019 | 52 comments



I think the George Box aphorism linked at the bottom ("All models are wrong [but some are useful]") is closer to the right way to think about this.

With complexity comes additional explanatory power. Something with perfect explanatory power has infinite complexity. But the tradeoff is not linear in most problem domains; we can first add those concepts to our model that maximize the explanatory power relative to the complexity they introduce.
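
To make that trade-off concrete, here is a minimal sketch (mine, not the commenter's) of greedy forward selection: keep adding the feature that most improves R^2, and stop once the gain no longer justifies the extra parameter. The data, the min_gain threshold, and the helper names are hypothetical.

import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def greedy_model(X, y, min_gain=0.01):
    """Add columns of X one at a time, keeping the one that raises R^2 most,
    and stop once the best available gain drops below min_gain."""
    n, p = X.shape
    chosen, current = [], np.ones((n, 1))          # start with intercept only
    score = r_squared(current, y)
    while len(chosen) < p:
        gains = {j: r_squared(np.hstack([current, X[:, [j]]]), y) - score
                 for j in range(p) if j not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] < min_gain:                 # complexity no longer pays for itself
            break
        chosen.append(best)
        current = np.hstack([current, X[:, [best]]])
        score += gains[best]
    return chosen, score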

And much of the world, physical and social, can be explained in fairly simple models, which is excellent. For things that are less well captured with simple models, or for which precision is so important we wish to shrink the error term further, then great, pile on more complexity progressively.


“a model that completely describes the system is about as useful as a map at 1:1 scale” — a quote from my advisor in grad school.


One of my profs was fond of the saying "with enough money, anyone can design a bridge that will stand up. Only an engineer can design a bridge that will just stay up".

I think the same is true of simulation and modeling: it requires a domain expert to include the useful stuff in your model while skirting or abstracting away the extraneous bits.


The second year civil engineers at my university had a project where they had to design a bridge that could support 2 people, but would break under the weight of 3, to teach that exact lesson.


People vary in weight by more than 50%. I don't see that the spec is possible.


The three people are known to you in advance (it’s your group). It was common to game this a little by saving the biggest person as the third one that had to break the bridge.


Whoosh..

That's the point!


To quote Ulam

Knowing what is big and what is small is more important than being able to solve partial differential equations.


Well, a 1:1 map is super useful. This is another use case of VR. You can visit apartments or wander the world in Google Earth. I also used this recently for renovation work: my wife created the model of how the studio would look in the future, I imported it into Unity, and we visited it in VR before starting the groundwork.

The trick is that the map itself is 1:1, but moving around in the map is not constrained by the same limitations as in the real world.


Arguably the trick is it's not a physical map, which isn't a detail that would have needed to be specified when the saying first came into use.


Even a physical 1:1 map can have utility, since it ostensibly has more portability. That is, as long as it's a map and not a 3D model. Even then, though, a 3D model can have uses, such as tailoring when the customer cannot physically be there. That is, as long as it's a 3D model and not a clone. Then again, a clone has its uses...

Whoever came up with that saying just wasn't very imaginative.


Physical 1:1 maps are still useful. They have numerous military applications, and in general allow you to become familiar with an area without having to physically be in that area, if being there or getting there is difficult for any reason.


Do you have any example of a physical 1:1 map? I truly believed that such a thing was (physically) impossible.


Maybe Exercise Tiger[0], which was a rehearsal for the D-Day Normandy landings, where they created a replica of the beach landscape that would be encountered. Or there's Sindh Kalay[1], which is an Afghan village created in Norfolk, UK.

0. https://en.wikipedia.org/wiki/Exercise_Tiger

1. https://www.independent.co.uk/news/uk/this-britain/welcome-t...


I would imagine soldiers train for urban combat in actual houses or whatever landscape they need.


I would also say that a "complete model" is not a model anymore, but a copy of the thing itself.

A model is always an incomplete simulation of something, keeping some relevant aspects but discarding less relevant ones while remaining useful.


But that's completely untrue when the implementation does not match.

Suppose one day we manage to create brain uploads. They're not brains but would have additional or different capabilities despite being 1:1 maps - it being possible to restore the original from them. This is because they could be run on different hardware.

Likewise, consider lossless compression. It is an accurate model of the data in shorter form.
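
For illustration, a minimal sketch of the lossless-compression analogy using Python's zlib: the compressed bytes are a shorter but fully faithful representation, since the original is recoverable bit for bit.

import zlib

original = b"the same phrase repeated " * 100
compressed = zlib.compress(original)

assert len(compressed) < len(original)           # shorter form
assert zlib.decompress(compressed) == original   # nothing was lost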


This is a good example where overloading the meaning of a word decreases its usefulness. There might be a better word for "an effective copy of the same underlying thing but with a different substrate/implementation"; perhaps "homomorphism", perhaps make up a new word.


We already have the word in art: it's a sketch or diorama - something known not to be an accurate representation.

Or just call it a simplification instead of a model. It's a subset of all models. Alternatively, an explanation of it captures all the detail.

"Model" is combined with another interesting word, essence (as in essential), which is what it's supposed to capture for a given purpose.

If the purpose of the model is explanatory power and accuracy, these two do not necessarily form a one dimensional trade-off.


I think it would be easier to just accept that "model" already means "simplified description" (source https://en.oxforddictionaries.com/definition/model) and use another word to refer to the superset you're mentioning.

That's because language is mostly useful for communication between people, here and now. Treating a natural language as a construct to refine until it reaches a mathematical beauty where nice and short words mean what they "should mean" often causes so much grief and wasted energy.


Implementation is an aspect though, possibly the non-relevant one for your use case.

A braindump would keep some aspects, like the neuron configuration, but discard others, like vascular properties, the exact positional information of neurons, etc. You could even go as deep as the exact positioning of all the atoms contained in the original brain.


> This paradox may be used by researchers to explain why complete models of the human brain and thinking processes have not been created and will undoubtedly remain difficult for years to come.

This assertion would only be justified if we knew enough to create complex models that we do not understand. The reality is that we don't know enough yet to create any sort of working model of the whole human brain and its thinking processes.


Essential nonlinearities are rife in what little neurophysiology we've encountered.

I am more or less sold on (possibly deterministic) chaotic models for things like tinnitus.

That's close enough to "a complex model we cannot understand" for me.


My point is not that Bonini's paradox is false; it is that it is being used incorrectly, in the quoted passage, to 'explain' something that has a different explanation. The paradox itself seems to me to be almost certainly true, but rather trivially so, without adding much at all to our understanding of complex systems. There are a lot of incomplete models that are useful and add to our understanding of the thing being modeled.


Interesting view. You write: “The reality is that we don't know enough yet to create any sort of working model of the whole human brain and its thinking processes.”

To me, this sentence seems to mean that it may be possible some time down the road to "know enough" and create a complex model of the human brain that's understandable. There are also some in philosophy who argue that humans may be cognitively closed off to understanding things like the human brain [0]. They note that humans have biological limits to understanding certain things, just like an animal might. For instance, an ant may have biological limits to its understanding of how certain things work.

What do you think of philosophers who argue for cognitive closure?

[0]: https://en.m.wikipedia.org/wiki/Cognitive_closure_(philosoph....


I actually phrased that statement that way to avoid any speculation about what might be possible, as it would be a distraction from the point I am making.

As it happens, I think it is probable that such a model is possible in principle, and also that a complete model would not be understandable (in which case, Bonini's paradox would apply, but it is not the reason why we currently don't have such a model).

With regard to cognitive closure, I think the first sentence of the article you link to is trivially true, but I have yet to find a good argument against physicalism in matters of the mind, and philosophers cannot agree among themselves. I think it is interesting how little is said about biology in these arguments (Paul Churchland is one exception).


It's possible but extraordinarily unlikely. I think any philosopher who argues that it's _likely_ humans have a biological limitation preventing us from ever understanding the human brain is going to be making some obvious errors. Got any examples to check?


We may actually get something similar, perhaps even empirically.

And maybe we understand it. Worse, we think we do.


I think that's just a restatement of the point. In order to understand, and therefore create a model, you need to have a complete and thorough understanding of the original, so in terms of comprehension of the problem domain the model doesn't help much.


How about one of the classic examples of 'wrong but useful' models that is Newtonian physics / 'billiard ball' mechanics?

Newton didn't have to have a complete and thorough understanding of how physics actually works; the model is not an intentional simplification but subtly wrong, because no one knew better at the time. However, the Newtonian model is still really helpful in comprehending the problem domain, so we still teach it to students and use it in practice.


The discussion is about systems that are too complex for us to understand in their original form. For such highly complex systems, accurate models also sometimes have to be so complex we can't understand them either.

For simple systems, obviously this is not as much of a problem, or not a problem at all.


My point is that the quoted passage claims, as a fact, something that is merely a hypothesis, and not a well-supported one at that. For example, we have useful models of the weather and climate, despite the intractability of fluid dynamics.


We don't necessarily create complex simulations to understand them, but to have those simulations available so we can then build, for example, reinforcement learning models.

To take a simple example, if you have a high-school wagon-rolling-down-a-slope Newtonian physics exercise, you can capture in your mind everything that explains the problem. But then take https://youtu.be/a3jfyJ9JVeM : it's still Newton, but with so many constraints that we cannot picture everything in our mind, and we are not even solving the problem with a closed-form equation anymore. Still, we have modeled a simulation which enables us to run experiments and build locomotion algorithms. Actually, you don't even need to understand Newton or physics to implement that paper; you can just see it as optimizing a black box.

In the future I could see that happening at a larger scale. For example, we put everything we know about the brain or cells into a simulation; we can't picture what's really going on because the system is too complex, yet we can run simulations and optimize models to find, for example, new cures, treatments, etc.
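
As a minimal sketch of the "optimizing a black box" point above (my own illustration, not the paper's method): random search over a simulator's parameters, never looking inside it. The simulate function here is a hypothetical stand-in for any expensive simulation that returns a scalar score.

import random

def simulate(params):
    # Hypothetical placeholder: e.g. distance walked by a simulated creature
    # controlled by `params`. Replace with the real simulation call.
    return -sum((p - 0.3) ** 2 for p in params)

def random_search(n_params, n_trials=1000, seed=0):
    """Black-box optimization: sample parameter vectors, keep the best scorer."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = [rng.uniform(-1.0, 1.0) for _ in range(n_params)]
        score = simulate(candidate)
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

params, score = random_search(n_params=8)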


A realistic model is hard for a human to understand, but that doesn't account for the case where humans can leverage computing power to simulate and machine-learn whatever is "realistic".


This is an interesting point - if I create a fully accurate model that allows me to quickly simulate the outcome of any input, then that becomes an oracle that I can query to better understand the problem. But this then leads me to think that my understanding would then be a simpler mental model of what the simulation does, which brings me back to the original paradox.


Abstraction, i.e. modelling, means leaving out details. Understanding then becomes knowing what details you can leave out for the given purpose. In this context, it is essential to remark that scientific abstractions, i.e. physical-world, empirical theories, need to be tested experimentally. Otherwise, they will simply lack legitimacy.


Two points not mentioned:

1- The question of access: a theoretical exhaustive map of the brain would be easier to access than an actual brain.

2- Tools: a map is not a piece of paper anymore. A 1:1 map of Earth doesn't seem absurd, as long as we can navigate it with levels of detail.


A cycle-accurate emulator is a 1:1 model of the observed behavior of the original hardware under normal operating conditions (it doesn't model probing it with an oscilloscope, or running it at the wrong voltage or temperature, etc.). It's more useful than the original hardware because its state can be saved, restored, and examined, you can set breakpoints and watchpoints, it can easily be interfaced with other software, and it can run at different speeds than the original.
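
As a rough illustration (a hypothetical toy machine, not any real emulator core), the point is that the emulated state is plain data, so it can be snapshotted, restored, and inspected at will:

from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class MachineState:
    pc: int = 0                                    # program counter
    registers: list = field(default_factory=lambda: [0] * 16)
    memory: bytearray = field(default_factory=lambda: bytearray(4096))

class Emulator:
    def __init__(self):
        self.state = MachineState()
        self.breakpoints = set()

    def step(self):
        # Execute one (fake) instruction cycle; a real core would decode
        # self.state.memory[self.state.pc] here.
        self.state.pc += 1

    def run(self, max_cycles):
        for _ in range(max_cycles):
            if self.state.pc in self.breakpoints:  # pause exactly on a cycle
                break
            self.step()

    def save(self):
        return deepcopy(self.state)                # snapshot the full state

    def restore(self, snapshot):
        self.state = deepcopy(snapshot)            # rewind to the snapshot

emu = Emulator()
emu.breakpoints.add(100)
snap = emu.save()
emu.run(max_cycles=500)      # stops once pc reaches 100
emu.restore(snap)            # back to the saved point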


You are conflating useful systems, explainable systems, and understandable systems. All models, 1:1 or not, are useful in some sense. You might also be able to explain some event or behaviour of the system using a model. However, understanding requires building an abstraction graph that allows an arbitrary chain of reasoning and inference. As the system becomes complex, such an abstraction graph might become similarly complex, resulting in a human's inability to hold it in memory and execute inference/reasoning over it.


The Paul Valéry statement’s translation is fairly poor. It sounds more like:

“What is simple is always wrong. What is not is unusable”


And the most important bit is left out - you have to know when and how the model is inaccurate, or even the simple one is useless.

Explanatory power is limited by explanation of errors.


Exactly - this seems to be the issue with many models in the social sciences: they often seem to do equally well at explaining opposite outcomes.


A complex model that describes a complex system is not unusable, though. There is a reason we want to accurately simulate the Navier-Stokes equations with complex boundary conditions.


To quote the original article: "As a model of a complex system becomes more complete, it becomes less understandable. Alternatively, as a model grows more realistic, it also becomes just as difficult to understand as the real-world processes it represents"

If your model is simple enough to represent the original system accurately, then the original system is not complex.


> If your model is simple enough to represent the original system accurately, then the original system is not complex.

My model can be complex enough to represent the original system and still be useful. I can iterate on design and make measurements without building the real thing.


> If your model is simple enough to represent the original system accurately, then the original system is not complex.

No, this is just plain wrong, sorry.

I can describe a population fairly well using one or two ODEs. That doesn't mean that a population of individuals is simple. It's incredibly complex. However, at the meso-scale of just looking at population size and growth rate, my model is accurate (enough).

A model doesn't describe a system at all levels. It simulates the behaviours, whether explicit in the system or emergent from other properties, in order to describe its behaviour at a certain level.
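
For illustration, a minimal sketch of the "one or two ODEs" point: logistic growth, dN/dt = r*N*(1 - N/K), integrated with a plain Euler step. The parameter values are made up; none of the individuals' complexity appears in the model.

def logistic_growth(n0, r, k, dt=0.01, steps=10_000):
    """Return the population trajectory of dN/dt = r*N*(1 - N/K)."""
    n, trajectory = n0, [n0]
    for _ in range(steps):
        n += r * n * (1.0 - n / k) * dt
        trajectory.append(n)
    return trajectory

pop = logistic_growth(n0=10.0, r=0.5, k=1000.0)
print(pop[-1])   # approaches the carrying capacity K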


> My model can be complex enough to represent the original system

This does not mean that your model is not complex. It means that the system is simple.

At least, it is simple enough to be represented fully in a model. I think this paradox refers mostly to systems that are too complex to be represented in any model at full accuracy, and must therefore be simplified.


The paradox invokes an absolute. How do you even know the system is "too complex"? Maybe it is too complex for humans but not for a galaxy brain?


See the comment at https://news.ycombinator.com/item?id=20019426 that refers another quote:

> I think the George Box aphorism linked at the bottom ("All models are wrong [but some are useful]") is closer to the right way to think about this.

Also, see the original article:

> Bonini's Paradox, named after Stanford business professor Charles Bonini, explains the difficulty in constructing models or simulations that fully capture the workings of complex systems (such as the human brain).

A system is "too complex" by this definition if there can be no complete and accurate virtual representation of it; being a definition, it can't be wrong. If you emulate a CHIP-8, for instance, you are able to accurately represent the full state of the system inside the emulator; therefore, the emulator is capable of full accuracy. It's impossible to do the same with a human brain, as quoted in the article.


This seems to be more of a tautology than a paradox.


dang, thanks for stating the obvious, Professor Charles Bonini.


I keep on saying this about sociology research. They keep on using single variables to track complex systems, and they end up failing. All of the lazy social justice fields end up causing immense damage.



