
Bonini's Paradox - hhs
https://en.wikipedia.org/wiki/Bonini%27s_paradox
======
notafraudster
I think the George Box aphorism linked at the bottom ("All models are wrong
[but some are useful]") is closer to the right way to think about this.

With complexity comes additional explanatory power. Something with perfect
explanatory power has infinite complexity. But the tradeoff is not linear in
most problem domains; we can first add those concepts to our model that
maximize the explanatory power relative to the complexity they introduce.

And much of the world, physical and social, can be explained with fairly simple
models, which is excellent. For things that are less well captured by simple
models, or for which precision is important enough that we wish to shrink the
error term further, then great: pile on more complexity progressively.
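The diminishing-returns tradeoff described above can be sketched with a toy example (my own, not from the thread): fitting polynomials of increasing degree to noisy data. Explanatory power (lower fit error) grows with model complexity, but most of the gain comes from the first few added terms.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(0, 0.1, x.size)  # "true" system: sine plus noise

# Fit increasingly complex models and measure residual error.
for degree in (1, 3, 5, 7, 9):
    coeffs = np.polyfit(x, y, degree)
    residual = y - np.polyval(coeffs, x)
    rmse = np.sqrt(np.mean(residual ** 2))
    print(f"degree {degree}: RMSE = {rmse:.3f}")

# Error falls steeply from degree 1 to 3, then flattens near the noise floor:
# past that point, added complexity buys almost no explanatory power.
```

The concepts that "maximize explanatory power relative to complexity" here are the low-order terms; everything after them mostly fits noise.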

------
hprotagonist
“a model that completely describes the system is about as useful as a map at
1:1 scale” — a quote from my advisor in grad school.

~~~
na85
One of my profs was fond of the saying "with enough money, anyone can design a
bridge that will stand up. Only an engineer can design a bridge that will
_just_ stay up".

I think the same is true of simulation and modeling: it requires a domain
expert to include the useful stuff in your model while skirting or abstracting
away the extraneous bits.

~~~
toomanybeersies
The second year civil engineers at my university had a project where they had
to design a bridge that could support 2 people, but would break under the
weight of 3, to teach that exact lesson.

~~~
thaumasiotes
People vary in weight by more than 50%. I don't see that the spec is possible.

~~~
flashingleds
The three people are known to you in advance (it’s your group). It was common
to game this a little by saving the biggest person as the third one that had
to break the bridge.

------
mannykannot
> This paradox may be used by researchers to explain why complete models of
> the human brain and thinking processes have not been created and will
> undoubtedly remain difficult for years to come.

This assertion would only be justified if we knew enough to create complex
models that we do not understand. The reality is that we don't know enough yet
to create any sort of working model of the whole human brain and its thinking
processes.

~~~
hhs
Interesting view. You write: _“The reality is that we don't know enough yet
to create any sort of working model of the whole human brain and its thinking
processes.”_

To me, this sentence seems to mean that it may be possible some time down the
road to “know enough” and create a complex model of the human brain that’s
understandable. There are also some in philosophy who argue that humans may be
cognitively closed off to understanding things like the human brain [0]. They
note that humans have biological limits to understanding certain things just
like an animal might. For instance, an ant may have biological limits to its
understanding of how certain things work.

What do you think of philosophers who argue for cognitive closure?

[0]:
[https://en.m.wikipedia.org/wiki/Cognitive_closure_(philosophy)](https://en.m.wikipedia.org/wiki/Cognitive_closure_\(philosophy\))

~~~
mannykannot
I actually phrased that statement that way to avoid any speculation about what
might be possible, as it would be a distraction from the point I am making.

As it happens, I think it is probable that such a model is possible _in
principle_ , and also that a complete model would not be understandable (in
which case, Bonini's paradox would apply, but it is not the reason why we
currently don't have such a model).

With regard to cognitive closure, I think the first sentence of the article
you link to is trivially true, but I have yet to find a good argument against
physicalism in matters of the mind, and philosophers cannot agree among
themselves. I think it is interesting how little attention is paid to
biology in these arguments (Paul Churchland is one exception).

------
gouh
We don't necessarily create complex simulations to understand them, but to
have those simulations available to build on, e.g. for reinforcement learning
models.

To take a simple example: with a high-school wagon-rolling-down-a-slope
Newtonian physics exercise, you can capture in your mind everything that
explains the problem. But then take
[https://youtu.be/a3jfyJ9JVeM](https://youtu.be/a3jfyJ9JVeM): it's still
Newton, but with so many constraints that we cannot picture everything in our
minds, and we are not even solving the problem with a closed-form equation
anymore. Yet we have modeled a simulation which enables us to run experiments
and build locomotion algorithms. In fact, you don't even need to understand
Newton or physics to implement that paper; you can just see it as optimizing a
black box.
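The "optimize a black box" framing above can be sketched in miniature (all names and numbers here are my own toy setup, not the paper's): even without a closed-form solution, a stepped simulation can be queried and searched over directly.

```python
import math

G = 9.81                    # gravity, m/s^2
SLOPE = math.radians(20)    # incline of the slope

def final_speed(drag, duration=5.0, dt=0.001):
    """Euler-integrate v' = g*sin(slope) - drag*v and return v(duration).
    This is the 'simulation': we only ever observe its output."""
    v = 0.0
    for _ in range(int(duration / dt)):
        v += (G * math.sin(SLOPE) - drag * v) * dt
    return v

def find_drag(target_speed, lo=0.01, hi=10.0):
    """Bisection over the black box: higher drag -> lower final speed."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if final_speed(mid) > target_speed:
            lo = mid          # still too fast: need more drag
        else:
            hi = mid
    return (lo + hi) / 2

drag = find_drag(target_speed=2.0)
print(f"drag ≈ {drag:.3f}, final speed ≈ {final_speed(drag):.3f} m/s")
```

Nothing here required solving the dynamics analytically; the search only compares simulated outputs against the target, which is the same stance a learning algorithm takes toward a far richer physics simulation.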

In the future I could see that happening at a larger scale. For example, we
put everything we know about the brain or cells into a simulation; we can't
picture what's really going on because the system is too complex, yet we can
run simulations and optimize models to find, say, new cures and treatments.

------
louis8799
A realistic model is hard for a human to understand, but this doesn't account
for the fact that humans can leverage computing power to simulate, and
machine-learn, whatever is "realistic".

~~~
falcor84
This is an interesting point - if I create a fully accurate model that allows
me to quickly simulate the outcome of any input, then that becomes an oracle
that I can query to better understand the problem. But this leads me to
think that my understanding would then be a simpler mental model of what the
simulation does, which brings me back to the original paradox.
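The "query the oracle, keep a simpler mental model" loop described above can be sketched concretely (the oracle here is a made-up function standing in for an expensive accurate simulation):

```python
import numpy as np

def oracle(x):
    """Stand-in for a complex, accurate model we can query but not read."""
    return np.sin(3 * x) * np.exp(-0.5 * x) + 0.1 * x ** 2

# Query the oracle densely over the region we care about...
xs = np.linspace(0, 1, 50)
ys = oracle(xs)

# ...then compress the answers into a three-coefficient quadratic: a crude
# but human-readable surrogate of what the oracle does locally.
c2, c1, c0 = np.polyfit(xs, ys, 2)
print(f"surrogate: y ≈ {c2:.2f}x² + {c1:.2f}x + {c0:.2f}")
```

The surrogate is exactly the "simpler mental model of what the simulation does" from the comment: faithful enough to reason with, simple enough to hold in your head, and wrong in all the ways the paradox predicts.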

------
neokantian
Abstraction, i.e. modelling, means leaving out details. Understanding then
becomes knowing what details you can leave out for the given purpose. In this
context, it is essential to remark that scientific abstractions, i.e.
physical-world, empirical theories, need to be tested experimentally.
Otherwise, they will simply lack legitimacy.

------
hotBacteria
Two points not mentioned:

1- The question of access: A theoretical exhaustive map of the brain would be
easier to access than an actual brain.

2- Tools: A map is not a piece of paper anymore. A 1:1 map of Earth doesn't
seem absurd, as long as we can navigate it with levels of detail.

------
mrob
A cycle accurate emulator is a 1:1 model of the observed behavior of the
original hardware under normal operating conditions (it doesn't model probing
it with an oscilloscope, or running it at the wrong voltage or temperature,
etc.). It's more useful than the original hardware because its state can be
saved and restored and examined, and you can set breakpoints and watchpoints,
and easily interface it with other software, and it can run at different
speeds to the original.
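The save/restore point above can be illustrated with a toy machine (a made-up accumulator VM, not any real ISA): because an emulator's entire state is ordinary data, snapshotting and rewinding it is just a copy.

```python
import copy

class ToyVM:
    """Minimal accumulator machine; the entire state lives in one dict."""
    def __init__(self, program):
        self.program = program          # list of ("add"|"mul", operand)
        self.state = {"pc": 0, "acc": 0}

    def step(self):
        op, arg = self.program[self.state["pc"]]
        if op == "add":
            self.state["acc"] += arg
        elif op == "mul":
            self.state["acc"] *= arg
        self.state["pc"] += 1

    def snapshot(self):
        return copy.deepcopy(self.state)    # save: just copy the data

    def restore(self, snap):
        self.state = copy.deepcopy(snap)    # rewind: just copy it back

vm = ToyVM([("add", 2), ("mul", 3), ("add", 4), ("mul", 5)])
vm.step(); vm.step()      # acc = (0 + 2) * 3 = 6
snap = vm.snapshot()
vm.step(); vm.step()      # acc = (6 + 4) * 5 = 50
vm.restore(snap)          # rewind to the saved point
print(vm.state)           # {'pc': 2, 'acc': 6}
```

Breakpoints and watchpoints fall out the same way: the host program can inspect `vm.state` between any two `step()` calls, which the physical hardware never offers.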

~~~
sytelus
You are conflating useful systems with explainable systems and understandable
systems. All models, 1:1 or not, are _useful_ in some sense. You might also be
able to explain some event or behaviour of the system using a model. However,
understanding requires building an abstraction graph that allows an arbitrary
chain of reasoning and inference. As the system becomes complex, such an
abstraction graph may grow comparably complex, leaving a human unable to hold
it in memory and execute inference or reasoning over it.

------
d--b
The Paul Valéry statement’s translation is fairly poor. It sounds more like:

“What is simple is always wrong. What is not is unusable”

~~~
AstralStorm
And the most important bit is left out - you have to know when and how the
model is inaccurate or even the simple one is useless.

Explanatory power is limited by explanation of errors.

~~~
falcor84
Exactly - this seems to be the issue with many models in the social sciences:
they often seem to do equally well at explaining opposite outcomes.

------
dearrifling
Having a complex model that describes a complex system is not unusable though.
There is a reason we want to accurately simulate the Navier-Stokes equations
with complex boundary conditions.

~~~
phoe-krk
To quote the original article: "As a model of a complex system becomes more
complete, it becomes less understandable. Alternatively, as a model grows more
realistic, it also becomes just as difficult to understand as the real-world
processes it represents"

If your model is simple enough to represent the original system accurately,
then the original system is not complex.

~~~
dearrifling
> If your model is simple enough to represent the original system accurately,
> then the original system is not complex.

My model can be complex enough to represent the original system and still be
useful. I can iterate on design and make measurements without building the
real thing.

~~~
phoe-krk
> My model can be complex enough to represent the original system

This does not mean that your model is not complex. It means that the system is
simple.

At least, it is simple enough to be represented fully in a model. I think this
paradox refers mostly to systems that are too complex to be represented in any
model at full accuracy, and must therefore be simplified.

~~~
AstralStorm
The paradox invokes an absolute. How do you even know the system is "too
complex"? Maybe it is too complex for humans but not for a galaxy brain?

~~~
phoe-krk
See the comment at
[https://news.ycombinator.com/item?id=20019426](https://news.ycombinator.com/item?id=20019426)
that refers to another quote:

> I think the George Box aphorism linked at the bottom ("All models are wrong
> [but some are useful]") is closer to the right way to think about this.

Also, see the original article:

> Bonini's Paradox, named after Stanford business professor Charles Bonini,
> explains the difficulty in constructing models or simulations that fully
> capture the workings of complex systems (such as the human brain).

A system is "too complex" by this definition if there can be no complete and
accurate virtual representation of it; being a definition, it can't be wrong.
If you emulate a CHIP-8, for instance, you are able to accurately represent
the full state of the system inside the emulator; therefore, the emulator is
capable of full accuracy. It's impossible to do the same with a human brain,
as quoted in the article.
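The CHIP-8 point can be made concrete: the machine's *complete* architectural state is small enough to write down in full. This is a data sketch of that state (field sizes per the common CHIP-8 description), not a working emulator.

```python
from dataclasses import dataclass, field

@dataclass
class Chip8State:
    memory: bytearray = field(default_factory=lambda: bytearray(4096))  # 4 KiB RAM
    v: bytearray = field(default_factory=lambda: bytearray(16))  # V0..VF registers
    i: int = 0                 # 16-bit index register
    pc: int = 0x200            # programs conventionally load at 0x200
    stack: list = field(default_factory=list)  # up to 16 return addresses
    delay_timer: int = 0       # 8-bit, ticks down at 60 Hz
    sound_timer: int = 0       # 8-bit, ticks down at 60 Hz
    display: bytearray = field(default_factory=lambda: bytearray(64 * 32))  # 1 px/byte here
    keys: int = 0              # 16-key hex keypad as a bitmask

state = Chip8State()
total = (len(state.memory) + len(state.v) + len(state.display)
         + 2 + 2 + 1 + 1 + 2 * 16 + 2)  # rough byte count of everything above
print(f"complete machine state ≈ {total} bytes")
```

A few kilobytes, fully enumerable: that is what "capable of full accuracy" means here, and it is exactly the property a brain-scale system lacks.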

------
meuk
This seems to be more of a tautology than a paradox.

------
OrgNet
dang, thanks for stating the obvious, professor Charles Bonini.

------
lanevorockz
I keep saying this about sociology research: they keep using single variables
to track complex systems, and they end up failing. All of the lazy social
justice fields end up causing immense damage.

