It is difficult to distinguish the two cases in practice.
I don't think it follows that either the universe or our minds are computational. For one, we don't have a satisfactory definition of "mind", so that question has to remain open until we do.
And the universe could be "computational" in a completely unfamiliar way. It might even be humanly unimaginable.
As a wild speculation, consider the extreme case: the universe actually functions as an indivisible whole.
We use math to make predictions about repeating causal patterns, and invariably those predictions work only in a time- and precision-limited way. If the universe were a true thing-in-itself on its own terms, it could be "computational", but our models would always be incomplete and imperfect.
We would never be able to understand it fully, because we would never be able to build a complete and accurate representation of its computational mechanisms.
Could you give me a definition for “definition”?
At some point you have to be content that your inability to define every single term in your vocabulary doesn’t hinder your ability to use that term in practice.
All we have in the way of creating/communicating knowledge is symbol manipulation. Language.
Turing machines are language recognisers, and I am comfortable using them as practically sufficient (for the purposes of conversation) models of minds.
To be sure you see my point, defining “definition” is recursive and that puts you on the computer science turf.
I am a computer. Mathematics is a language; or more precisely - it is a grammar. Our Mathematical models of the universe are computational, and since Mathematics can only prove things up to isomorphism that is as far as colourless, formal, linguistic reductionism will take us.
As for Kantian transcendentalism: look no further than Jean-Yves Girard’s work on Linear Logic, the Geometry of Interaction, and transcendental syntax. It is all computer science intersecting with quantum physics.
You are a biological system. A massive, noisy and dynamical system of interactions we can never hope to model in full fidelity. You exist because of unstable equilibria and tiny little proteins pushing against entropy.
> All we have in the way of creating/communicating knowledge is symbol manipulation. Language.
We can communicate with more than abstract symbol language. A dog growls and bares its teeth. A thunderstorm communicates its approach.
We're pattern recognizers, but messy, soupy ones.
That is the language of systems theory.
You are welcome to use the Church-Turing-Deutsch principle as a semantics.
>A massive, noisy and dynamical system of interactions we can never hope to model in full fidelity.
We are already modeling it.
The only argument here is the degree of fidelity required or possible.
> You exist because of unstable equilibria and tiny little proteins pushing against entropy
>We can communicate...
Begging the question: why do proteins exist?
Entropy, Information, Communication. Heh!
That is the language of Information Theory.
>We’re pattern recognisers
Yes. That is what I said - computer/language recogniser.
If there's no pattern/structure you can't parse it.
However you define/conceptualise us, every time you use the word “I” or “we” you are exemplifying recursion.
That is the great thing about self-locating beliefs - they are unfalsifiable.
Quantum Entanglement is computation, which makes me a monist and a materialist/physicalist. Computers reify our formal languages by manipulating matter.
In the language of comp-sci: Code is data - data is code. Homoiconicity.
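Lisp is the canonical homoiconic language; Python is not homoiconic, but its standard `ast` module makes the code-is-data / data-is-code point concretely (this illustration is mine, not from the thread):

```python
import ast

# Code as data: parse an expression into a tree we can inspect and rewrite.
tree = ast.parse("1 + 2", mode="eval")

# Data as code: mutate the tree (addition -> multiplication), compile, run.
tree.body.op = ast.Mult()
result = eval(compile(ast.fix_missing_locations(tree), "<ast>", "eval"))
print(result)  # 1 * 2 = 2
```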
In the language that has fallen out of fashion as of late...
In the beginning was the Word, and the Word was with God, and the Word was God. —John 1:1
Detecting anything is signal processing.
And if I am wrong about this - all the better! It is exactly the right kind of wrongness we need to progress science.
I don't need to 'store' the infinite set of natural numbers - that would require infinite space (memory).
But I can describe the infinite set in a single line of Python: Nat = lambda x=0: (x, lambda: Nat(x+1)). Each call yields the current number and a thunk for the rest, so nothing is computed until you ask for it.
It's just a space-time tradeoff.
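As a runnable sketch of that tradeoff (the stream-as-pair encoding here is my own choice, one of many possible):

```python
# A lazy stream of the naturals: (head, thunk producing the tail).
Nat = lambda x=0: (x, lambda: Nat(x + 1))

def take(n, stream=None):
    """Force only the first n elements of the (conceptually infinite) stream."""
    stream = stream if stream is not None else Nat()
    out = []
    for _ in range(n):
        head, tail = stream
        out.append(head)
        stream = tail()  # pay time only for the elements you actually demand
    return out

print(take(5))  # [0, 1, 2, 3, 4]
```

Constant code, unbounded data on demand: the description is finite even though the set is not.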
The non-compressibility of a sequence is a falsifiable claim.
All it takes is a single demonstration of successful compression.
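A quick demonstration with an off-the-shelf compressor (zlib standing in for the reference machine; the example data is my choice):

```python
import random
import zlib

random.seed(0)
regular = b"01" * 500                                   # highly patterned
noisy = bytes(random.getrandbits(8) for _ in range(1000))  # pseudorandom

# A single successful compression falsifies a non-compressibility claim.
print(len(zlib.compress(regular)))  # far below 1000 bytes
print(len(zlib.compress(noisy)))    # around 1000 or more: no compression found
```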
Perhaps you could've compressed it had you kept running for just a second longer?
Obviously, the code will not halt until it compresses the stream successfully.
Run them for N steps. If none terminates on the target bitstring of length 10, then we've eliminated the computation hypothesis for programs under 10 bits and runtime N. If we continue this approach until we've exhausted all the available storage and time, then we eliminate the computation hypothesis altogether for our scenario.
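The procedure can be sketched with a deliberately tiny reference machine (my toy stand-in for a universal TM: a "program" is a bitstring that the machine replays cyclically):

```python
from itertools import product

def run(program, steps):
    """Toy reference machine: replay the program cyclically for `steps` steps."""
    return "".join(program[i % len(program)] for i in range(steps))

def shortest_program(target, steps):
    # Enumerate every bitstring shorter than the target, shortest first,
    # run each for the step budget, and check whether it produces the target.
    for k in range(1, len(target)):
        for bits in product("01", repeat=k):
            p = "".join(bits)
            if run(p, steps) == target:
                return p
    return None  # hypothesis eliminated for this machine and this budget

print(shortest_program("0110110110", 10))  # "011" regenerates the target
print(shortest_program("0000000001", 10))  # None: no shorter program exists
```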
A binary alphabet has 2^10 such strings.
A decimal alphabet has 10^10 such strings.
So before you can make the assertion you've made, you first have to answer two questions:
1. How many symbols does your alphabet contain?
2. How did you choose that number?
Down that path you arrive at the Linear Speedup Theorem.
Yes, there is always a Turing machine that can generate the target with any desired performance characteristics.
So the key is to keep the reference Turing machine fixed.
Hence the argument for compression.
A Turing machine with a better language is faster.
It compresses time.
So, that's why, when talking about compression in general, we keep the reference TM fixed.
When testing, there is the possibility that with a single test we have the wrong reference machine. But as the number of tests grows, that probability approaches zero. So, with enough testing, we can eliminate the Linear Speedup Theorem loophole, at least with high probability.
As a practical application of this sort of thing, check out the 'normalized information distance'. It is based on algorithmic mutual information, which in theory is not approximable, as you argue. However, in practice it works surprisingly well, even without prior knowledge of the domain.
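The usual practical form is the normalized compression distance, which substitutes a real compressor for the uncomputable Kolmogorov term. A minimal sketch using zlib (the compressor choice and the test strings are mine):

```python
import zlib

def C(b: bytes) -> int:
    """Compressed length under a fixed real-world compressor (zlib here)."""
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    (uncomputable) normalized information distance."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy dog " * 19 + b"again and again "
c = bytes(range(256)) * 4

print(ncd(a, b) < ncd(a, c))  # similar texts score closer than unrelated ones
```

No domain knowledge is involved: all the "understanding" lives in how well the shared structure compresses.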
Thanks a lot