1 - http://robotics.cs.tamu.edu/dshell/cs689/papers/anderson72mo...
For example, if you took a Shingled Magnetic Recording hard drive filled with VP9-encoded movie files on an NTFS partition, even if you perfectly understood the physics and figured out all the individual magnetic fields, you would still have a rough time making sense of what was on it without this higher-level information.
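One way to make the point concrete: from raw bytes alone you can get statistics, but not meaning. A toy sketch (the data below are hypothetical stand-ins, not real drive reads): Shannon entropy can tell "structured" bytes from "noise-like" bytes, but it can't tell you whether you're looking at VP9 payload, NTFS metadata, or anything else.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; compressed or encrypted data approaches 8.0."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical stand-ins for raw reads from two very different files:
video_like = bytes(range(256)) * 16        # stand-in for compressed video bytes
text_like = b"the quick brown fox " * 200  # stand-in for a plain-text file

print(shannon_entropy(video_like))  # near 8.0: statistically looks like noise
print(shannon_entropy(text_like))   # much lower: visible structure
```

Either way, the number tells you nothing about codecs or filesystems; that knowledge lives at a higher level.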
There's much work to be done, but unless something really amazing happens it'll be a long time before we get anywhere near new fundamental physics (there's still loads of work to be done elsewhere).
Actually, there are a lot of predictions we can't make today. Things that are very important:
1) Why do most young people do fine with COVID-19 but some apparently healthy people do not?
2) How exactly do proteins fold?
3) Will it rain tomorrow at location x at time y?
4) When will my pet tiger decide to maul me?
5) Will a drug with this molecular structure actually treat this disease?
We know a load about the basic building blocks. About quarks and protons and electrons. But once we get to more complicated systems composed of these building blocks, we struggle to make predictions.
See for example the scrubbed SpaceX launch due to weather.
I am sure NASA and SpaceX have scientists who are experts in physics, but they could not predict in advance that there would be bad weather on that exact day.
What we do now is make calculations on “spherical cows” and call everything else “chaotic” and “too many variables” and pat ourselves on the back for reducing everything to its basic blocks.
Then there are problems like protein folding and animal behaviour where we understand the building blocks, but even if we could track every single building block, something would still be missing for us to make predictions.
I'm not a CFD guy but it's partly an issue of not having enough computing power - fluid dynamics doesn't scale the same way as regular mechanics so you have to compute the whole picture.
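To put rough numbers on that (my addition, not the commenter's): the standard Kolmogorov estimates for direct numerical simulation of turbulence give grid points scaling like Re^(9/4) and total work like Re^3, which is why brute force runs out fast as Reynolds number grows. A back-of-envelope sketch:

```python
# Kolmogorov-style scaling estimates for direct numerical simulation (DNS).
# Illustrative only; real CFD uses turbulence models (RANS/LES) to avoid this.

def dns_grid_points(re: float) -> float:
    """Grid points needed to resolve all scales: ~ Re^(9/4)."""
    return re ** 2.25

def relative_cost(re: float) -> float:
    """Total work including time stepping: ~ Re^3."""
    return re ** 3

for re in (1e3, 1e5, 1e7):  # rough range from lab flows to full aircraft
    print(f"Re={re:.0e}: grid ~ {dns_grid_points(re):.2e} points, "
          f"work ~ {relative_cost(re):.1e} x baseline")
```

A thousand-fold increase in Reynolds number costs roughly a billion times more compute, which is the "doesn't scale like regular mechanics" problem in a nutshell.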
We've spent the last century arguing about what this means for physics, and we still have no idea - even though quantum events can have macroscopic consequences. (To give a contrived example - a decaying atom may damage some human DNA and cause cancer in a historically important individual who dies decades earlier than they would have otherwise, changing the course of history.)
It's more accurate to say that we can model a small selection of mathematically "pure" system types that are neither random nor emergent, on timescales that are comprehensible to us.
Even the map metaphor starts to break at this point because it's hard to think of an alternative map that's not a literal map, and maps all seem the same (i.e. resolving down to the geometry-based, literal "map system" we already know) if you squint your eyes enough. The map is the territory, and it's not, and we can say the same of the pencil, and your shoes, and the concept of intuition. Just as a way of putting "known physics is the map" into context.
Your "ant" comment hits home :)
I went to college and studied physics. I learned how the behavior of many atoms leads to thermodynamics and the properties of materials. I learned how elementary quantum mechanics leads to atomic physics and chemistry. I learned how the strong force gives rise to nuclei. I learned how the Standard Model produces everything on this list. I learned how, at each layer, the correspondence was fiendishly subtle, but the work of thousands of physicists over decades had built the bridges.
At the end of it, I was still dissatisfied, because I hadn't learned what emergence was. I'd see crackpots presenting their personal theories of everything, declaring that they knew what no physicist ever could, because their model had emergence. They knew about it, and we didn't. But I still didn't know what it was.
Then I dug deeper and it became obvious. Emergence is a beautiful idea that has always been present, in some form or another, in every field of science. It is so well-developed in some fields, such as particle physics, that it has become a suite of quantitative methods with incredible accuracy. I had been learning it the entire time, and now use it every day without a second thought. Meanwhile, "emergence" is a brand name. It's a specific word passed down over generations of pop science used to convey mystique. It's a tool some charlatans use to pretend they know more than they do.
No offense meant to the article, but I often hear people say they're most excited about some substanceless theory because it "has emergence", and this is part of why.
This one's pretty good, though. https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futili...
But I think a good summary is that emergence happens when relations between things on a finer level of analysis become the things themselves on a coarser, higher level AND that these new things also have rich and complex interactions among them, making the levels roughly equally interesting, complicated and useful. Overall the bottom level is "enough" in theory but rising up the levels allows us to gain orders of magnitude more "practicality".
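A concrete toy example of relations-becoming-things (my illustration, not from the comment) is Conway's Game of Life: at the bottom level there are only cells obeying a local rule, but a "glider" is a stable higher-level thing that moves one cell diagonally every four generations, and it's the glider, not the cells, that's the useful unit of description one level up.

```python
from collections import Counter

def step(live):
    """One Game of Life generation over a set of live (x, y) cells."""
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    return {c for c, n in neigh.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)

# The low-level cell states all changed, but the higher-level "thing" persists,
# shifted one cell diagonally:
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(after4 == shifted)  # True
```

Nothing in the rule mentions gliders; they're exactly the kind of coarser-level entity the summary above describes.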
Sometimes by common sense we stumble upon a mid-level concept and then when we learn about the lower level explanation, we proclaim that the original phenomenon was "emergent".
Now it's a great (but I think not very fruitful) philosophical debate to decide how much "realness" each level has. Is the higher level just a convenient fiction, an illusion, a practical description, with a lowest, rock-bottom level that the universe/God really "cares about"? Or are all levels just as real, and the whole hierarchy/ladder just our conception? In that case, not only can rising up the abstraction be an artificial action, but descending lower can also be illusory/non-natural, a result of our search for practicality, and the rock bottom isn't really that real and the universe doesn't know to care about it.
Just as in math the naive explanation is we have fundamental axioms and we derive the higher level theorems from it. But in reality what happens is we know what theorems we want (to make things interesting or empirically useful) and then look for ways to structure the axioms to give rise to the "real" theorems.
Or, what view of a signal is more real and fundamental, the time domain or the Fourier domain? Both can be equally real or equally fundamental, and can give rise to equally interesting analyses. Or perhaps not, and the time domain is what the universe cares about and has in its "source code" (which is probably a misplaced metaphor though), and Fourier is just for convenience...
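The "equally real" part is easy to demonstrate: the two views are related by an invertible transform, so neither contains more information than the other. A small NumPy sketch (illustrative, my addition):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)        # the "time-domain" view

spectrum = np.fft.fft(signal)            # the same object, Fourier view
roundtrip = np.fft.ifft(spectrum).real   # back again, nothing lost

print(np.allclose(signal, roundtrip))    # True: neither view is "more real"

# Parseval's theorem: total energy agrees across the two views as well
# (the 1/N accounts for NumPy's unnormalized FFT convention)
print(np.isclose(np.sum(signal**2),
                 np.sum(np.abs(spectrum)**2) / len(signal)))  # True
```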
Other times, when making relations into things (reifying them) and vice versa, we don't really move higher or lower; we stay roughly on the same level. An example is duality in graph theory (faces to nodes, nodes to faces) or in optimization (exchanging the role of variables and equations). And at the risk of sounding wooey, this is similar to the duality/non-duality ideas in Eastern philosophy: looking at the same stuff but interpreting "the gaps" as "the things" and vice versa, and then saying neither is more real, both are real and neither is, together they are real, separately they aren't, etc. Maybe the levels are not a strict hierarchy, maybe they loop back, maybe consciousness is indeed such a feedback loop across different levels (a la Hofstadter); then you get to resonances in feedback loops, and if you push it far it gets quite wooey. But it's still very differently wooey than some quantum healer fixing the resonances in your liver through the TV.
Perhaps this hints at us not even understanding the lower levels of many areas well enough yet.
I think people use emergence more when it's viewed and suggested as independent from our description, it really emerges on its own and we just discover and describe this. And "abstraction" is used when we view our role greater, we invented an abstraction to easier manage the complexity. So emergence is more incidental, accidental, serendipitous and unexpected, while abstraction is intentional and purposeful and designed.
Now which is which and whether the distinction itself touches on something real or is just a different view of the same thing, is another great topic for discussion.
Hofstadter discusses this in his book I Am a Strange Loop: there's a line that's something like "sometimes the bottom level - despite being entirely responsible for the effect in question - is nonetheless totally irrelevant to it"
He went to a lot of effort to try to come up with a plausible description of what "life" would be like in such an environment. That fired my imagination also, and I wondered how we could do such a thing in a scientific manner. Given the rules of the Standard Model, in principle almost all aspects of the surface conditions could be elucidated, but I feel we would be missing almost all of the essence of it. Just like a deep understanding of Quantum Electrodynamics won't help you determine from first-principles that Earth has pretty sunsets or that your hair stands up if you take a wool jumper off.
For example, the surface of a neutron star would be very bright in gamma rays, but those are subject to pair-production in the strong magnetic field of the star, making light effectively a short-range sensory mechanism. Meanwhile, the enormous density of the crust means that it transmits sound incredibly well and at enormous speeds, making acoustics the equivalent of our photon-based vision sense! But it would be hemispherical at short range and 2D in a complicated way at long range. Spacetime itself would be strongly polarized, atoms would be distorted into spindles and have strongly anisotropic behaviour, general relativistic effects could be felt in everyday scenarios, and there are likely dozens of other effects that we can't even fathom by merely staring at a handful of field equations that we developed in our comparatively cold, flat spacetime.
[2] The speed of sound in neutron stars is just under 0.578 times the speed of light! https://physics.stackexchange.com/questions/54684/is-the-spe...
I'm particularly intrigued by why range would impact the sound-based sensing. Is it because of spacetime curvature or something?
A creature living on a neutron star might use sonar like we use light, because the speed of sound in the star's crust is more than half the speed of light. In many ways it would be comparable. But the crust is only below the creature, not above. So its "sound sense" would only work for one hemisphere surrounding the creature; the other side would be silence, its equivalent of total darkness.
Neutron stars have thin crusts, possibly mere meters thick, and almost certainly with layers of some sort. Again, sound would probably travel at different speeds in the various layers, just like sonar does in oceans, where salinity affects propagation. There would be complex effects with distance to do with this. Locally, the sound could travel outwards in a hemisphere, but at long range it would likely be more 2D.
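For a sense of scale, a rough back-of-envelope calculation (assumed figures: a 10 km stellar radius, which is a typical textbook value, and the ~0.578c crust sound speed cited above):

```python
import math

# Assumed illustrative figures, not a physical model of any real star.
C = 299_792_458.0       # speed of light, m/s
SOUND = 0.578 * C       # upper-end crust sound speed cited above
RADIUS = 10_000.0       # metres: typical quoted neutron-star radius

circumference = 2 * math.pi * RADIUS
t_sound_ms = circumference / SOUND * 1e3
print(f"sound all the way around the star: {t_sound_ms:.3f} ms")

# "Seeing" a point one kilometre away by sonar (out-and-back echo):
echo_us = 2 * 1000.0 / SOUND * 1e6
print(f"1 km echo delay: {echo_us:.1f} microseconds")
```

Sub-millisecond round trips around the whole star: by our standards that really would function more like vision than like sonar.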
This is all speculation, but it's based on real science. My point is that we cannot really know, and no amount of staring at equations will help paint a realistic picture.
When I learned about emergence during my computing and philosophy class, it was like finding the missing puzzle pieces: everything (I mean that) suddenly made sense.
I love how all things in existence emerge simply via self-organisation. All that is needed is communication: gravity for stardust to form planetary systems; sight for birds to form flocks; chemical signals for ants to form colonies; human communication to form societies.
Once during a diving lecture, the lecturer pointed out that in many cases we can call self-organisation simply an ecosystem, which makes it much easier to explain this topic to people.
He said a lake will self-organise when you rob it of an important fish, and so will our body when we start doing sport (growing muscles). Every system will react to change and try to re-organise itself.
When applied to behavior I would say that all that is needed is a shared goal, a shared understanding. The individual units don't necessarily have to communicate. They only need to share some common direction or goal. For example, the infamous starlings don't formally conspire. Yet the result functions as a conspiracy, so to speak.
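That local-rules-only picture fits in a few lines of code. Below is a minimal, noise-free toy in the spirit of flocking models like Vicsek's (my assumption, not the commenter's model): each agent repeatedly averages its heading with its two ring neighbours, and a shared direction emerges with no leader and no explicit conspiracy. (Angles are averaged linearly here, ignoring wrap-around, to keep the sketch short.)

```python
import random

random.seed(1)
N = 50
headings = [random.uniform(0, 360) for _ in range(N)]  # degrees, random start

def spread(hs):
    """How far apart the most divergent agents are."""
    return max(hs) - min(hs)

before = spread(headings)
for _ in range(2000):
    # Local rule only: average with your two neighbours on a ring.
    headings = [(headings[i - 1] + headings[i] + headings[(i + 1) % N]) / 3
                for i in range(N)]
after = spread(headings)

print(f"spread before: {before:.1f} deg, after: {after:.4f} deg")
```

No agent knows the flock's final direction in advance; it's a property of the group that the individuals merely converge on, which is the "functions as a conspiracy" effect.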
Something I also did not consider much was the sociology of science. It seems, at least from my outsider perspective, that the highest status scientist is a theorist who either predicts an experimental result observed decades in the future, or a theorist who synthesizes disparate observations. This value, insofar as it is true, stems from a reductionist perspective. I see reductionism being harmful to inquiry if only because it favors some scientific roles over others, theoretical over applied, theorist over experimentalist, discovery over reproducibility.
For all of its smart wording, this amounts to the same old issue Aristotle had with describing the mathematical universe in which somehow humans are different enough to warrant "higher stuff".
Is there any useful criticism of science besides pseudo-attacks always coming from anthropocentric, and usually religious, corners? Can't we just do the one, final Copernican shift and frickin' move humans from the centre?
> These equations seem to govern the behaviour of all our toy experiments, with multiple superimposed world states interfering with each other when they happen to transform into the same state as each other… except when a human looks at the result, whereupon all but one of the world states just vanishes!
The first person I know of¹ to propose that the world states don't just vanish – it's just that "brain that sees event X" and "brain that sees event Y" don't converge to the same state, so you don't see quantum interference when people get involved – was Hugh Everett III. This is a simpler explanation, stops quantum mechanics contradicting special relativity, solves the EPR paradox, side-steps Bell's theorem, stops God playing dice with the universe… in short, it solves every problem² except where the Born rule comes from.³
When he proposed it to Niels Bohr, he was laughed out of the room.⁴
In principle, we could probably eliminate anthropocentrism in physics models from popular consciousness entirely… but then we wouldn't be prepared for new fields, where we'd introduce it right back again.
¹: Okay, technically Erwin Schrödinger mentioned the idea five years earlier, but he didn't do much with it. Apart from, you know, coming up with the equation in the first place…
²: Edit to add: I didn't know about Grete Hermann's flaw in John von Neumann's proof that all non-local hidden variable theories were impossible. Such theories still violate special relativity, but they're not impossible; many-worlds doesn't solve this problem because it isn't actually a problem. (Many-worlds is still the simplest theory I know of, but I'm less certain that it's the simplest possible theory consistent with the evidence… making this comment less relevant than I initially thought it was.)
³: Some people think many-worlds explains the Born rule. I haven't heard all the arguments, but all the ones I've heard have been wrong.
⁴: Artistic license. But Léon Rosenfeld certainly considered him "indescribably stupid" and unable to "understand the simplest things in quantum mechanics".
For what it is worth, I did not read it that way at all, and the fact that the author is a neuroscientist suggests (though does not prove) that it was not intended as such.
To me, the article seems consistent with the view that the universe is reducible to a fundamental physics, leaving the author puzzling over why many of us feel we have more agency than this view would seem to imply. It is a reasonable issue for anyone, not just neuroscientists, to ponder.
This is not an "extra" claim on top of conservation laws/fundamental symmetries.
> Reductionism can be understood as a combination of (1) the claim that the intelligibility of the universe depends on the unity of scientific theories
It's strange and frankly likely just projection to say that it's the reductionists that claim the universe must be a certain way in order for it to be intelligible.
> Despite its limited usefulness as a guide to scientific practice, reductionism is a powerful cultural idea. We might call it the Lego-block conception of reality: only the Lego blocks are real, so ‘fundamental’ science involves identifying what the blocks are and how they interact, while ‘applied’ science involves discovering the right combination or permutation of blocks that accounts for the phenomenon in question.
The question of realism is separate from reductionism of fundamental law, and it's not a good sign to (deliberately?) confuse them. EDIT: Just to be clear to people skimming this stuff, I can hold two theories: a) your dog is real, b) your dog is not real, only quarks are real. We can debate this for as long as we'd like, but what I am not necessarily saying is that your dog's dogness corresponds to some suspension or modification of fundamental physics.
> that parts and wholes have ‘equal’ ontological priority, with the wholes constraining the parts just as much as the parts constrain the wholes.
Again, if ontology means "realism" this is a confusion, if it means the way things work, it's simply wrong or completely unsupported.
 For example, putting emergentism and reductionism on two ends of a spectrum is not strictly correct. There are reductionist emergent theories out there (in both phil. of science, as well as phil. of mind).
Usually when people make a big deal about emergence they're confusing the map for the territory. They couldn't predict the overall state evolution just by considering the parts.
They’re thinking of their notion of the generic entity type. But all that’s out there in the territory are the entity instances with their specific state.
The parts can’t exist in a stateless fashion. It’s just that in our heads — the map — we can consider them in that fashion.
One such reference might be "Strong and Weak Emergence" by David Chalmers, given that "weak" emergence was introduced but "strong" emergence was not even mentioned (though indirectly implied).
As a consequence, this new entity has to function as a causal agent; otherwise, why populate the world with entities that don't work as causal agents? Upward causation is consistent with both ontological and epistemic emergence. Downward causation is consistent with epistemic emergence. Are there examples of ontological emergence that can play a role in downward causation?
Until then, it is all unbounded philosophical speculation. These speculations get bounded (constrained) by empirical sciences.
This endless subdivision is a pastime.
It is just another way of saying that things look different from different "zoom levels", hence we assign different models and/or meanings to them at each different level. But that's just our very human perspective, it is not some pre-existing/universal truth of a system.
You choose, somewhat arbitrarily, a particular level of reductionism to apply. (It seems from what you are saying that you would agree that) atoms are real, but bricks are not; bricks are what we call a clump of iron, oxygen, silicon, aluminum, etc. which is mostly stuck together in a particular shape and size. Bricks are based on a human perception of togetherness and objecthood that is completely arbitrary; there's no real delineation of the edge of a brick, nor is there a real definition of 'brick' in the first place.
All of this is (more-or-less) true. Yet, at the same time, not only can I move the slider to a place where the brick is very real (say, when it's smashing out your brains after being wrapped in a slice of lemon), I can also move it to a place where atoms are no less emergent than bricks. 'Atom' is just the name we give to a cluster of quarks in various configurations, with an electron (kind of, arguably) nearby.
Yet, atoms are a construct present in physics because they're useful in making predictions about physical phenomena. Nothing fundamental about it. And bricks are a construct in engineering and architecture because they have a certain functional and aesthetic purpose in each of those disciplines.
It's all always arbitrary. It's all always human understanding. Science is human understanding. The question is not 'is this just an artifact of human perception', the question is 'is this relevant to the questions I am asking'.
Depending on which variant of emergentism people are talking about at different times, they could be talking about something that's fully consistent with the reductionistic program, something that's not making new claims but is just about levels of description, something that's vague but not obviously wrong, or some profound revelation proposing we overturn everything we thought we knew about physics or some natural phenomena or anything in between. The stakes are either utterly trivial, or as profound as they could possibly get. And depending on who you talk to about emergence at any given moment, you'll get a confident answer that could potentially fall anywhere on that spectrum, or worse, a strange kind of passiveness where this issue is kind of handwaved away.
If we only understand the physical laws that can be represented as equations, and those equations only describe the known ones, then physics >= math.
We can also create math that has no known relation to physical laws, so math > physics.
Round and round and round it goes where it stops nobody knows...
"Emergence: The Connected Lives of Ants, Brains, Cities, and Software"
An utterly phenomenal book:
(It earned a place on my shelf next to "Godel Escher Bach", by Hofstadter, and "Society of Mind", by Minsky.)