In the end, the problem with the claim is that if it is true then 'computer' is defined so loosely as to be uninformative; as soon as you add plausible conditions, it seems obviously false.
According to the best physics, the universe: isn't programmable; doesn't evolve deterministically; isn't described by functions over integers; isn't electronic; doesn't transmit power through programmable operation; isn't abstract; doesn't have causal powers through mere arrangements of parts... and so on.
Whatever it could mean, "computer" here is a strange term. I've no idea why people are so keen on it.
I take it to be a sort of Humean-idealist-passivist theory: the world is an abstract set of discrete states that are just mere arrangements with no necessary connections; there is only pattern in these discrete states; these patterns are abstract and can be realised by mere number alone.
This is roughly something like an early-20th-century logical positivist view, which was somewhat influential -- but it's wholly false, and has been thrown out in scientific practice.
The article clearly states that it’s using our colloquial meaning of “computer” as a metaphor to help convey their thought experiment. It even has a disclaimer about using metaphors.
FWIW I think it’s an interesting thought experiment. I think it’s especially interesting to draw the parallel to biology. Clearly our brains are doing computation by even the strictest definition of computer, so at which biological level does computation stop? A chunk of my brain is clearly doing computation, a neuron must be doing computation since that is the building block for our brains. Single cells must also be doing computation since a single cell had all of the computational knowledge encoded in it to build me. All of these processes are built upon physical reality, so it’s not that big a leap that similar processes might emerge elsewhere and at different scales and using different physical mechanisms.
I don't regard computation as a physical process, or a property of any kind. It's a description of a system, say like "salad" or "party", which whilst informative in conversation, doesn't map to any actual property of reality.
ie., there is no property shared by all salads, nor by all computers.
Saying, "everything is a computer" is a bit presocratic in its way, like "everything is water" or "everything is fire"
In practice, what we mean by "computer" is something which can transmit power in a programmable fashion.. this has more to do with our ability to control devices, and what those devices are, than anything in the "computer".
It turns out LCDs, keyboards, CPUs, etc. can all be joined up so I can do something with them.. I call this a "computer" and leave it there.
As far as the mathematical definition goes, computability is a property of functions from ints to ints (and almost all such functions are uncomputable).. this is either useless or uninformative here. It has no relevance for physics.
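For the record, the standard counting argument behind that parenthetical, as a quick math sketch (LaTeX notation):

    % programs are finite strings over a finite alphabet, hence countable;
    % functions Z -> Z form an uncountable set, so almost all are uncomputable
    |\{\text{programs}\}| = \aleph_0, \qquad
    |\mathbb{Z}^{\mathbb{Z}}| = 2^{\aleph_0} > \aleph_0
    \;\Longrightarrow\; \text{almost every } f\colon\mathbb{Z}\to\mathbb{Z} \text{ is uncomputable.}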
How is it not electric?? You have trillions of fields and particles everywhere. How's it not programmable? It's computing new structures with gases at any moment.
You can think of the universe that way, but is it meaningful to say it's computing new gas structures? Maybe for some physicists. Metaphysically speaking, I don't think it makes sense for the universe to be a computer, unless we're inside a simulation. What would that even mean?
The electric part seems an arbitrary limitation. The original computers ran off food, which, okay, technically is chemistry, which technically is just a minor application of Coulomb's law, but ultimately you can use stones in the desert or water flowing through pipes to compute. Of course all this is still based on chemistry, but you can imagine computing with gravity or nuclear forces.
I think you missed the point - this is what they mean by "defined so loosely as to be uninformative." By saying "in some sense" you are allowing arbitrarily abstract definitions of the term "computer" thereby making the term itself no longer useful.
Perhaps the problem is that you're trying to fit "computer" into the wrong level of abstraction, or projecting your position with respect to it incorrectly.
E.g. a computer may or may not be programmable for processes that live inside the computer.
This said, the compute abstraction is a logical gadget. Logical in the sense that it aims for consistency at a minimum.
A logical gadget is a computer, by Curry-Howard correspondence.
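To make that correspondence concrete, here's a minimal sketch in Python (with the caveat that Python's type hints only gesture at propositions-as-types; a faithful demonstration would need something like Haskell or Agda). A total function inhabiting a type doubles as a proof of the corresponding implication:

    from typing import Callable, TypeVar

    A = TypeVar("A")
    B = TypeVar("B")
    C = TypeVar("C")

    def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
        # Under Curry-Howard this term "proves": from (A -> B) and (B -> C),
        # infer (A -> C), i.e. transitivity of implication. The program is the proof.
        return lambda a: g(f(a))

    # Usage: compose "proofs" of int -> str and str -> int into a proof of int -> int
    roundtrip = compose(str, len)
    print(roundtrip(12345))  # 5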
This is a broad enough framework to explain why what you're debating against is the case: given that philosophy bridges language to logic and provides the framework for all the sciences, seeing nature as compute is like saying there's a logic to nature.
So yes, everything is compute. Everything more detailed than that (Turing machines, digits, whether it's "it from bit" or not) is a separate matter of debate that may or may not be required.
The utility is relative to the purpose. If everything is compute but you don't know how to program it, and programming is all you want to do, then sure, that's useless.
But if what you want is a powerful analysis framework, that's gold.
And if you know how to abstract your way forward with that framework, that works too.
So arguing that the distinction is useless because it's universal, is like saying logic is useless because it's universal.
Not really. The best models that we have to explain measurable physical phenomena are wave equations from quantum mechanics that describe probability distributions. There have been proposals for nearly a century that a deterministic model underlies this, but they have to contend with observed phenomena and inconveniences like Bell's Theorem that suggest that our universe just isn't deterministic.
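For the concrete numbers behind that, here's a minimal sketch in Python (assuming numpy) of the CHSH form of Bell's theorem: for a spin singlet, the quantum correlation at analyser angles a and b is E(a,b) = -cos(a-b), and the CHSH combination reaches 2*sqrt(2) ≈ 2.83, while any local deterministic (hidden-variable) model is bounded by 2.

    import numpy as np

    def E(a, b):
        # Quantum correlation for a spin singlet measured at angles a and b
        return -np.cos(a - b)

    # Angles giving maximal violation
    a1, a2 = 0.0, np.pi / 2
    b1, b2 = np.pi / 4, 3 * np.pi / 4

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))  # ~2.828 = 2*sqrt(2); local hidden variables can't exceed 2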
That said, we don't have a unified theory of the universe. General relativity, the dark matter question, and quantum mechanics have not been unified into any mathematical theory. All of those theories are pretty weird, mathematically. And we have no efficient way to simulate on classical computers the models that we do have.
We don't live in a billiard ball universe. It's a CS fantasy that evaporates as soon as you start doing experiments on matter and energy in the actual universe we are in.
> It's a CS fantasy that evaporates as soon as you start doing experiments
The interesting bit (pun) is that this seems to be a recurring pattern. Whenever there is an intellectual revolution that establishes new ways of thinking / acting, there is a tendency to apply its toolkit to everything in sight, including "the big questions".
So it was with the industrial revolution and the hordes of related engineers: the universe was invariably seen as "a machine". With the digital revolution and the hordes of computer engineers, the universe is invariably "a computer". Effectively just a glorified version of the law of the instrument [1] "to a person with a hammer everything looks like a nail".
The controversial question (and one that cannot be dismissed easily) is whether the universe is in some broader sense "mathematical". The huge success of mathematical descriptions around certain corners of the universe makes people think there might be something to it. But maybe mathematics too, is just our hammer.
> The controversial question (and one that cannot be dismissed easily) is whether the universe is in some broader sense "mathematical"
A similar question: does time evolve in discrete steps, or is it continuous?
The passage of time can only be established by comparing with physical phenomena that evolve over time. Even if that comparison suggested "discrete steps", that wouldn't exclude the possibility of time evolving continuously, with the instrument / experiment simply too 'crude' to see between steps.
Right up there with "are we living in a simulation?", and "does $DEITY exist?". As long as you can't break into a deeper layer (assuming that exists!), it's undecidable, and a philosophical rather than a scientific question, imho.
It's possible that we simply haven't yet evolved the mental faculties to reconcile them. After all, none of these theories were around even 200 years ago; we had in fact no clue whatsoever about any of that stuff. Mathematics is an evolving process and it can make leaps. The current knowledge might be like distinct coordinate patches on an underlying manifold that we will asymptotically cover with a consistent collection.
On the other hand it may indeed be a remarkable feature of the human mind that it can "make mathematical sense" of subsets of reality, but there is no overarching system and no reason to expect that we will be able to make simultaneous mathematical sense of "everything".
It might take a long time to collect sufficient such "metadata" to help tilt the balance.
I mostly agree with this but I would like to emphasise that Bell's theorem emphatically doesn't tell you the universe isn't deterministic. It tells you there isn't a local hidden variables model for it (under appropriate assumptions).
There are deterministic models which "escape" Bell's theorem's assumptions. In particular, you can assume superdeterminism (which makes it very hard to do physics); alternatively, Everettian "many worlds" quantum mechanics is completely consistent with Bell's theorem.
Superdeterminism gets around the "problem" of Bell's inequality experiments by saying that the choices of experiment settings chosen by the experimenters are correlated with things we usually wouldn't expect them to be correlated with (like the system being measured).
This makes it really hard to do physics because we usually assume you can choose to measure whatever the hell you want independent of the state of the system.
Say we have some toy physical system which changes color regularly: for 10 seconds it's red, for the next 10 seconds it's green, and so on. We assume that we'll be able to gain physical understanding of such a thing by measuring it whenever we like, but in a superdeterministic universe our choice to do the experiment can be correlated with the system, so we might be "forced" to only measure it when it happens to be red. We'll only ever see it be red and we will end up framing an incorrect "law of physics" that says the thing is red.
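A toy simulation of exactly that scenario, as a sketch in plain Python (the function names are made up for illustration): the system alternates red/green every 10 seconds, but the experimenter's "free" choice of measurement time is secretly correlated with the system's phase, so every observation lands on red.

    import random

    def system_color(t):
        # Toy system: red for 10 s, then green for 10 s, repeating
        return "red" if (t // 10) % 2 == 0 else "green"

    def choose_measurement_time():
        # Superdeterministic "choice": it always falls in a red phase
        cycle_start = random.randrange(1000) * 20   # start of some 20 s cycle
        return cycle_start + random.uniform(0, 10)  # ...but only its red half

    observed = {system_color(choose_measurement_time()) for _ in range(10_000)}
    print(observed)  # {'red'} -- we'd wrongly infer a "law" that the thing is red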
Yes, models. But a model doesn't need to be "real", it just models something real to a certain extent. But models tend to break down if you look close enough, and I think this may also happen to QM at some point.
Bell's Theorem doesn't rule out determinism, it only rules out hidden variables. If the universe is non-local, Bell's theorem fits well with determinism.
> We don't live in a billiard ball universe
We do - at least as long as we look at clumps of matter. The billiard ball universe breaks down if we look at the constituents of matter, but somehow it re-emerges if we put enough of those constituents together. It's probably the biggest riddle in physics why this happens. But it does.
> But models tend to break down if you look close enough, and I think this may also happen to QM at some point.
Sure, but there's absolutely nothing to suggest that it will be some kind of deterministic computation underneath.
> We do - at least as long as we look at clumps of matter.
Not even. Even non-quantum clumps of matter are influenced by continuous fields and dilation effects from both special and general relativity. Even without QM, our universe is not efficiently simulatable on our computational models, because of general relativity.
> Sure, but there's absolutely nothing to suggest that it will be some kind of deterministic computation underneath
The universe behaves very deterministically if we look at "clumps of matter". Why is it this way when this determinism isn't already part of the "base"? For me that's at least a "suggestion". Not a proof of course, but still a hint.
> ... because of general relativity.
General relativity doesn't fit together with QM, so either one is (or both are) "wrong" (in the sense that they only approximate reality to a certain degree).
I'm even sceptical about special relativity: It's a good model and works well in most occasions, but it may still be wrong on a fundamental level. Most of the assumptions under which Einstein proposed SR (no QM, static universe) don't hold anymore.
> The universe behaves very deterministically if we look at "clumps of matter". Why is it this way when this determinism isn't already part of the "base"? For me that's at least a "suggestion". Not a proof of course, but still a hint.
Just because a system is randomized doesn't mean it's not predictable: when measured in certain ways, it will statistically tend to clump around certain states. Suppose that every second, I flip a magic random coin and walk either 2 feet forward or 1 foot backward. Then after a million seconds, you'll quite probably find me about half a million feet from where I started. Small-scale random processes can easily create something predictable on the large scale.
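A quick sanity check of that arithmetic in Python (a sketch): the expected displacement per step is 0.5*(+2) + 0.5*(-1) = +0.5 ft, so after 10^6 steps the walker sits near 500,000 ft, with fluctuations only on the order of sqrt(10^6)*1.5, i.e. a few thousand feet.

    import random

    steps = 1_000_000
    # +2 ft or -1 ft with equal probability each second
    position = sum(2 if random.random() < 0.5 else -1 for _ in range(steps))
    print(position)  # ~500_000, give or take a few thousand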
Still, I wouldn't characterize "clumps of matter" as being deterministic even in our everyday lives. There are many chaotic systems in this world, e.g., the weather, which can amplify randomness on the molecular level into a completely different state. Even the orbit of the Earth becomes unpredictable after several million years.
> I'm even sceptical about special relativity: It's a good model and works well in most occasions, but it may still be wrong on a fundamental level. Most of the assumptions under which Einstein proposed SR (no QM, static universe) don't hold anymore.
Special relativity is already 'wrong' in that it doesn't predict any of our observations of general relativity. But it unavoidably has plenty of truth in it, in that it is very successful at predicting an identical speed of light for all observers, and the effects (e.g., time dilation) that that implies. Any superseding theory has to explain the same observations, at which point special relativity will continue to act as a useful model for the large-scale effects.
> Just because a system is randomized doesn't mean it's not predictable
That's of course true (In fact I tend to also believe in a non-deterministic universe "at the core").
But if determinism falls out in the end, it's still a hint that there may also be deterministic effects at the root. Current observation can't rule that out; it's just our model which assumes pure randomness. But there are lots of possibilities for how randomness could sneak into QM without contradicting observation.
And unless we solve the measurement problem in QM (by finding a unified theory from which both Schrödinger's equation and Born's rule can be derived), it's still an open question. So considering it solved today is quite premature.
> ... chaotic systems ...
That's still deterministic. Sure, there may be some influence from quantum effects which then are amplified, but the dynamic of the chaotic system itself is still deterministic.
> (SR) ... predicting an identical speed of light for all observers
That's not really true. "Identical speed of light for all observers" is an observation which has been replicated quite often. SR is a way to explain this observation, but before SR, Lorentz already had a different model explaining it too. SR won because Lorentz used an (at the time) unobservable "ether" and Einstein argued that it's better to use Occam's Razor and throw this "ether" away.
But Einstein didn't know about QFT, the Big Bang and the microwave background, which all contradict Einstein's assumptions: QFT uses an "ether-like" vacuum, the Big Bang created a "T=0" for the universe, and the microwave background also gives an absolute reference frame for an absolute time. All this contradicts SR, so maybe SR is really wrong on a global level.
Which in turn would allow a non-local, realistic interpretation of quantum measurements because without SR simultaneity could be back on the table.
> But if determinism falls out in the end, it's still a hint that there may also be deterministic effects at the root.
What I'm saying is that it's a hint of absolutely nothing. Deterministic systems can very easily produce deterministic large-scale behavior, and randomized systems can also very easily produce deterministic large-scale behavior. Since the large-scale behavior is the same either way, it gives us no predictive power over its ultimate cause, in the Bayesian sense.
> That's still deterministic. Sure, there may be some influence from quantum effects which then are amplified, but the dynamic of the chaotic system itself is still deterministic.
Your argument is that because we see "determinism falling out in the end", we should also expect "determinism at the root". But I argue that in the real world, we don't even see "determinism falling out in the end". On short timescales, computers appear to simulate finite-state machines, and the Earth appears to move in a steady pattern around the sun. But looking further out, the computer ultimately turns to dust, and the Earth wobbles out of its current path, thanks to the chaotic dynamics of the solar system. That doesn't sound very deterministic to me, unless we baselessly assume a priori that they have a deterministic cause.
What determinism do you argue does truly fall out in the end?
> That's not really true. "Identical speed of light for all observers" is an observation which has been replicated quite often. SR is a way to explain this observation, but before SR, Lorentz already had a different model explaining it too. SR won because Lorentz used an (at the time) unobservable "ether" and Einstein argued that it's better to use Occam's Razor and throw this "ether" away.
In that case, we have two different interpretations that yield the exact same outcomes. Thus, I'd say that they're really just two different descriptions of the same model: they're equally correct, and Lorentz's description is just dispreferred due to being more difficult to work with.
> All this contradicts SR, so maybe SR is really wrong on a global level.
There's nothing in SR that says that "most" matter can't follow the same reference frame. It just says that your reference frame has no bearing on the laws of physics you perceive, contrary to older models of the ether.
As I said, we already know that SR is wrong in that it doesn't predict any of the effects from GR, cosmology, etc. It's not an end-all-be-all theory of everything. But it doesn't stop it from giving good predictions for most places in the universe.
> Which in turn would allow a non-local, realistic interpretation of quantum measurements because without SR simultaneity could be back on the table.
You can do all that today, by specifying a reference frame that you want to consider. After all, that's how QFT does it, since it's mostly concerned about local effects. But you won't get different results from what SR predicts (in particular, the physics won't change if you look at the same system in a different reference frame), except in the circumstances where we already know it's incomplete.
> What determinism do you argue does truly fall out in the end?
Mechanics is fully deterministic. The question is whether there is some kind of "QM random generator" which mixes into this, making things nondeterministic in the end. But it's possible to separate both, and the "big clumps of matter" part is fully deterministic then, because decoherence generally happens so fast that it doesn't matter. You need to prepare systems quite carefully to mix quantum randomness into it (as in Schrödinger's cat, for example).
> In that case, we have two different interpretations that yield the exact same outcomes
Only for "harmless cases". SR allows lots of strange stuff, especially if combined with gravity. Closed timelike curves for example.
But if time is absolute and only slowed down for objects moving against this background, then closed timelike curves couldn't exist. Also the trick with Kruskal–Szekeres coordinates wouldn't work anymore, because switching time and space would be unphysical. This way we wouldn't have to care about the singularity (at least in Schwarzschild BHs) anymore, because space would cease to exist behind the horizon of a BH and there would be no singularity.
> You can do all that today, by specifying a reference frame that you want to consider
But that wouldn't work with measurements of entangled objects, because there would be no way to define an absolute frame in which the change of the wave function into an eigenstate happens; it would always depend on the frame of the observer. QM requires that the change happens simultaneously, but SR doesn't allow absolute simultaneity.
Of course the problem with all of this is that at the moment I can't see a way to do experiments which would decide whether there is absolute time or whether SR is correct.
Yes, the biggest problem with this merely mechanistic picture is why anything should move at all. The CS view is essentially a divine-command one: the machine runs because of its 'program', which is somehow external to the machine. But this is a metaphor/illusion for programmable devices like electronic computers, which breaks down for the actual universe.
The universe must be inherently dynamical, ie., not mere discrete states that change by rules, but properties whose nature is to interact with the world...
This is why the abstract mathematical conception of "computer" cannot describe anything physical: mathematical objects do not have temporal properties; they do not change.
This leads to the illusion, for electric devices, that the change is extrinsic to the physics of the system, since we can specify the algorithm (somewhat) abstractly. But it isn't.. your pieces of meat are, in the end, the causal power which moves the liquid crystal, electric field, etc.
I think this is a caricature, or at least a shallow view, of computationalism. Nobody is claiming that the universe is a computer of the same kind as the ones we have. Instead, I think, it claims that the universe necessarily has to be computational in order to be real: everything has to be computational to be real, otherwise how could it ever be implemented?
On your "divine command" bit: I don't think any theory has a good claim on the ultimate ontology of reality, so I don't see how one view is better than another. It's all some kind of divine command at the moment. Computationalism at least constrains the necessary properties the divine must have.
It raises the question of what is "computational" and what is "mathematical". E.g. Tegmark has good points in the "Mathematical Universe" that describe possible interpretations of what it means to be mathematical. But how is computation different from math? I'd argue that computation is primarily stateful, logical, and (mostly) deterministic with a clear flow of time, whereas mathematical models are often time-reversible (as most, if not all, of QM is), stateless, and not necessarily "logical" in the sense of relating finite symbolic quantities. Rather, mathematical models happily relate infinite series, manifolds, surfaces, high-dimensional spaces; they talk about fields and operate on functions, etc.
The MWI is the second most popular interpretation of QM, and it's deterministic, because Schrodinger's equation evolves deterministically. The probability comes in when making measurements. But in MWI, that's just a function of decoherence hiding the other branches.
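A minimal numerical illustration of that split, as a sketch in Python (assuming numpy): the unitary, Schrödinger part of the evolution is identical on every run; randomness only enters when you sample measurement outcomes via the Born rule.

    import numpy as np

    # Deterministic part: a fixed unitary step applied to a fixed initial state
    U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = U @ np.array([1.0, 0.0])   # same state on every run

    # Probabilistic part: Born-rule sampling in the measurement basis
    probs = np.abs(psi) ** 2         # [0.5, 0.5]
    outcomes = np.random.choice([0, 1], size=10, p=probs)
    print(psi)       # identical run to run
    print(outcomes)  # varies run to run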
What would happen if you could build an artificial being that was capable of taking in input (action) and providing output (reaction)? I don't think predictability should be a defining property of computers, or we would not be trying to build AI. How components interact with each other, or how it is powered, should also not define a computer.
Your brain is a computer and we still don’t have much of a clue how it works.
We started computing with rudimentary tools and then kept extending that model to what it is today. What would happen if our concepts were entirely different?
So you're working backwards from the claim that reality is a computer to defining computer.
Yes, by the time you enumerate all the properties of reality, and call them a "computer", you'll find that this is not a computer in the sense of computer science.
That's a bit like determining a ship can only be made of wood, isn't it? A ship needs to float, it needs to be made of buoyant material, and it needs to have a sail. If a "sailor" is adjusting the power to a propeller that's turned by an electric motor that's running off a turbine driven by steam from a magical rock, it's not what we define as naval science, is it?
The thing missing here is that it's only a ship for humans. Ships are cultural artifacts. If we said the universe was a ship that was sailing itself, that would be misapplying something we make for nature itself. Independent of human culture, ships are just arrangements of matter. We make particular arrangements meaningful.
Same thing applies to computers and computation. It's only meaningful to use those words for matter we arrange to do certain stuff for us.
Predictability of output is very much a computing thing. If I try to compute the 11 trillionth digit of Pi in base-7, I should get the same answer every time. If I do not get the same answer, then the device is either malfunctioning (often) or just not a computer at all.
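The 11-trillionth digit is out of reach here, but the repeatability point is easy to demonstrate at small scale. A sketch in Python, assuming the mpmath library for arbitrary precision (pi_base7_digits is a made-up helper for illustration):

    from mpmath import mp

    def pi_base7_digits(n):
        # First n base-7 digits of pi's fractional part
        mp.dps = n + 20   # working precision, in decimal digits
        x = mp.pi - 3
        digits = []
        for _ in range(n):
            x *= 7
            d = int(x)
            digits.append(d)
            x -= d
        return digits

    # A (working) computer must give the same answer every time
    assert pi_base7_digits(50) == pi_base7_digits(50)
    print(pi_base7_digits(20))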
It is sort of assumed our brains are computers, but no one has ever demonstrated this to be the case to any degree. Human brains seem to, at least partially, operate on different principles.
I think a device which outputs a distribution of answers is still a computer. The information is an answer and needs to be computed. Determinism and repeatability isn't a requirement for defining computation.
Something can be predictable in some ways and not in others. If you build an AI that is capable of giving you answers to complex world issues, the intermediary steps might not be predictable, and perhaps neither will the final answer be, but you might just need it to be correct, because it will be an answer dependent on the external world, which is itself not very predictable.
> Your brain is a computer and we still don’t have much of a clue how it works.
Notwithstanding the fact that Daniel Dennett thinks of the brain as a virtual machine on which memes execute and evolve via cultural evolution, the common understanding of a computer does not include gray matter.
> What would happen if our concepts were entirely different?
That’s why we use different words to denote different concepts.
Before computers were programmable, there were still computers. Before they were machines, there were computers. A computer is a thing that carries out a computation. Obviously, the universe computes spacetime.
I was taught the universe was a computer in 3 different college courses 25 years ago. Not exactly a revelation.
"According to the best physics, the universe: isnt programmable; doesn't evolve deterministically; isnt described by functions over integers; isn't electronic; doesnt transmit power through programmable operation; isn't abstract; doesnt have causal powers through mere arrangements of parts; .. and so on."
You can't say any of these things, because electronic computers exist within the universe, but I get your point. That said, there are physicists beginning to assume the universe IS a computer and work forward from that assumption.