With complexity comes additional explanatory power. Something with perfect explanatory power has infinite complexity. But the tradeoff is not linear in most problem domains; we can first add those concepts to our model that maximize the explanatory power relative to the complexity they introduce.
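A minimal sketch of that greedy idea, assuming we have some way to score each candidate concept's explanatory gain and its complexity cost (both are hypothetical functions here, standing in for whatever the domain actually provides):

    def build_model(candidates, explanatory_gain, complexity_cost, budget):
        # Greedily grow the model by always adding the concept with the
        # best explanatory-power gain per unit of complexity introduced.
        model, spent = [], 0.0
        remaining = list(candidates)
        while remaining:
            best = max(remaining,
                       key=lambda c: explanatory_gain(model, c) / complexity_cost(c))
            cost = complexity_cost(best)
            if spent + cost > budget:
                break  # diminishing returns: stop at the complexity budget
            model.append(best)
            spent += cost
            remaining.remove(best)
        return model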
And much of the world, physical and social, can be explained with fairly simple models, which is excellent. For things that are less well captured by simple models, or for which precision matters enough that we want to shrink the error term further, we can pile on more complexity progressively.
I think the same is true of simulation and modeling: it requires a domain expert to include the useful stuff in your model while skirting or abstracting away the extraneous bits.
That's the point!
Knowing what is big and what is small is more important than being able to solve partial differential equations.
The trick is that the map itself is 1:1 but your moving around in the map is not constrained by the same limitations as in the real world.
Whoever came up with that saying just wasn't very imaginative.
A model is always an incomplete simulation of something, keeping some relevant aspects but discarding less relevant ones while remaining useful.
Suppose one day we manage to create brain uploads. They're not brains, but they would have additional or different capabilities despite being 1:1 maps: it would be possible to restore the original from them.
This is because they could be run on different hardware.
Likewise, consider lossless compression. It is an accurate model of the data in shorter form.
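A quick way to see the "accurate model in shorter form" point, using Python's zlib (the byte string is just a repetitive stand-in for real data):

    import zlib

    data = b"the quick brown fox jumps over the lazy dog " * 100

    compressed = zlib.compress(data)            # the model: same data, shorter form
    assert zlib.decompress(compressed) == data  # nothing relevant was discarded

    print(len(data), "->", len(compressed))     # 4500 -> far fewer bytes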
Or just call it a simplification instead of a model; simplifications are a subset of all models. Alternatively, call it an explanation, if it captures all the detail.
"Model" combines with another interesting word, essence (as in essential), which is what a model is supposed to capture for a given purpose.
If the purpose of the model is explanatory power and accuracy, these two do not necessarily form a one dimensional trade-off.
That's because language is mostly useful for communication between people, here and now. Treating a natural language as a construct to refine until it reaches a mathematical beauty where nice and short words mean what they "should mean" often causes so much grief and wasted energy.
A braindump would keep some aspects, like the neuron configuration, but discard others, like vascular properties or the exact positional information of neurons. You could even go as deep as the exact positioning of all the atoms contained in the original brain.
This assertion would only be justified if we knew enough to create complex models that we do not understand. The reality is that we don't know enough yet to create any sort of working model of the whole human brain and its thinking processes.
I am more or less sold on (possibly deterministic) chaotic models for things like tinnitus.
That's close enough to "a complex model we cannot understand" for me.
To me, this sentence seems to mean that it may be possible some time down the road to “know enough” and create a complex model of the human brain that’s understandable. There are also some in philosophy who argue that humans may be cognitively closed off to understanding things like the human brain. They note that humans have biological limits to understanding certain things, just as an animal might: an ant, for instance, may have biological limits to its understanding of how certain things work.
What do you think of philosophers who argue for cognitive closure?
As it happens, I think it is probable that such a model is possible in principle, and also that a complete model would not be understandable (in which case, Bonini's paradox would apply, but it is not the reason why we currently don't have such a model).
With regard to cognitive closure, I think the first sentence of the article you link to is trivially true, but I have yet to find a good argument against physicalism in matters of the mind, and philosophers cannot agree among themselves. I think it is interesting how little is said about biology in these arguments (Paul Churchland is one exception.)
And maybe we understand it. Worse, we think we do.
Newton didn't have to have a complete and thorough understanding of how physics actually works; his model is not an intentional simplification but subtly wrong, because no one knew better at the time. However, the Newtonian model is still really helpful for comprehending the problem domain, so we still teach it to students and use it in practice.
For simple systems, obviously this is not as much of a problem, or not a problem at all.
To take a simple example: in a high-school wagon-rolling-down-a-slope Newtonian physics exercise, you can capture in your mind everything that explains the problem. But then take https://youtu.be/a3jfyJ9JVeM . It's still Newton, but with so many constraints that we cannot picture everything in our minds, and we are not even solving the problem with a closed-form equation anymore. Yet we have modeled a simulation that enables us to run experiments and build locomotion algorithms. Actually, you don't even need to understand Newton or physics to implement that paper; you can just see it as optimizing a black box (see the sketch below).
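A minimal sketch of that "optimizing a black box" view; `simulate` here is a toy stand-in for the real physics simulation, which the optimizer never needs to look inside:

    import random

    def simulate(params):
        # Stand-in for an opaque physics simulation: takes controller
        # parameters, returns a fitness score (e.g. distance walked).
        return -sum((p - 0.5) ** 2 for p in params)

    def optimize_black_box(n_params, iterations=1000, step=0.1):
        # Simple hill climbing: perturb the best-known parameters and
        # keep the change whenever the simulation scores it higher.
        best = [random.random() for _ in range(n_params)]
        best_score = simulate(best)
        for _ in range(iterations):
            candidate = [p + random.gauss(0, step) for p in best]
            score = simulate(candidate)
            if score > best_score:
                best, best_score = candidate, score
        return best, best_score

    params, score = optimize_black_box(n_params=8)
    print(f"best score: {score:.4f}")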
In the future I could see that happening at a larger scale. For example, we put everything we know about the brain or about cells into a simulation; we can't picture what's really going on because the system is too complex, yet we can run simulations and optimize models to find new cures, treatments, etc.
1- The question of access:
A theoretical exhaustive map of the brain would be easier to access than an actual brain.
A map is not a piece of paper anymore. A 1:1 map of Earth doesn't seem absurd, as long as we can navigate it with levels of detail.
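In fact, the standard web-map tiling scheme already works this way: one map of Earth stored at many zoom levels, navigated by fetching only the tiles you need. A sketch of the usual tile math (the coordinates are just an example point):

    import math

    def tile_for(lat_deg, lon_deg, zoom):
        # Map a point on Earth to the (x, y) tile covering it at a given
        # zoom level; each extra level quadruples the number of tiles.
        n = 2 ** zoom
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
        return x, y

    # The same point at a coarse and a fine level of detail:
    print(tile_for(48.8584, 2.2945, 4))   # a tile spanning much of western Europe
    print(tile_for(48.8584, 2.2945, 19))  # a tile spanning tens of metres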
“What is simple is always wrong. What is not is unusable”
Explanatory power is limited by explanation of errors.
If your model is simple enough to represent the original system accurately, then the original system is not complex.
My model can be complex enough to represent the original system and still be useful. I can iterate on design and make measurements without building the real thing.
No, this is just plain wrong, sorry.
I can describe a population fairly well using one or two ODEs. That doesn't mean that a population of individuals is simple. It's incredibly complex. However, at the meso-scale of just looking at population size and growth rate, my model is accurate (enough).
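For the curious, the classic single-ODE version of this is logistic growth; a minimal sketch (parameter values are made up, and scipy is assumed for the integration):

    from scipy.integrate import solve_ivp

    # Logistic growth: dN/dt = r * N * (1 - N / K).
    # One ODE, two parameters, yet it tracks the meso-scale behaviour
    # (population size and growth rate) of an incredibly complex
    # population of individuals.
    r, K = 0.5, 1000.0   # growth rate and carrying capacity (made-up values)

    def logistic(t, N):
        return r * N * (1 - N / K)

    sol = solve_ivp(logistic, (0, 30), [10.0])
    print(sol.y[0, -1])  # approaches K: accurate enough at this scale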
A model doesn't describe a system at all levels. It simulates behaviours, whether explicit in the system or emergent from other properties, in order to describe the system at a certain level.
This does not mean that your model is not complex. It means that the system is simple.
At least, it is simple enough to be represented fully in a model. I think this paradox refers mostly to systems that are too complex to be represented in any model at full accuracy, and must therefore be simplified.
> I think the George Box aphorism linked at the bottom ("All models are wrong [but some are useful]") is closer to the right way to think about this.
Also, see the original article:
> Bonini's Paradox, named after Stanford business professor Charles Bonini, explains the difficulty in constructing models or simulations that fully capture the workings of complex systems (such as the human brain).
A model is "too complex" by this definition if there can be no complete and accurate virtual representation of it, it can't be wrong by definition. If you emulate a CHIP-8, for instance, you are able to accurately represent the full state of the system inside the emulator; therefore, the emulator is capable of full accuracy. It's impossible to do the same with a human brain, as quoted in the article.