You don't have to imagine an ordered grid. If the grid unit is small enough (say the Planck length, 1.6 × 10^-35 m) and the grid is chaotic, then at the distances of ~10^-16 m that we can actually measure, everything will look the same in all directions.
This happens in the same way that steel behaves isotropically even though its microscopic structure is anisotropic.
So there is no easy way to prove or disprove continuity of space.
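To make the averaging-out part of that concrete, here is a toy 1-D sketch (my own model and numbers, not a physical claim): a grid whose cell sizes are random with mean spacing a still yields lengths indistinguishable from a uniform grid once you span enough cells.

```python
import numpy as np

# Toy 1-D model of a "chaotic grid": cell sizes are random with mean spacing a.
# The length reconstructed by just counting cells differs from the true length
# by a relative error that shrinks like 1/sqrt(N).
rng = np.random.default_rng(0)
a = 1.0  # mean grid spacing (stand-in for the Planck length)

for n_cells in (10**3, 10**5, 10**7):
    cells = rng.uniform(0.5 * a, 1.5 * a, n_cells)  # chaotic cell sizes
    true_length = cells.sum()
    coarse_length = n_cells * a                     # what a coarse ruler would report
    rel_err = abs(coarse_length - true_length) / true_length
    print(f"N = {n_cells:>10,d} cells: relative error {rel_err:.1e}")

# Spanning the ~10^19 cells of a Planck-spaced grid probed at ~10^-16 m, the
# relative error would be of order 10^-10, i.e. roughly 10^-26 m: far below
# anything measurable.
```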
The "underlying issue" often at stake in the debate is whether reality is a computer, since it would need to be discrete if so, and often whether a computer can be made to simulate it.
However, what's missed here is that discreteness is a necessary but not a sufficient condition for reality being a computer.
Once you give any sort of plausible account of how reality could be discrete, as you've done here, you end up with non-computable aspects (e.g., typically, randomness). So the metagame is lost regardless: reality isn't a computer (/ no complete physical theories of reality are computable).
Though the meta-meta-game around "simulation" is probably internally incoherent in itself: whether reality is a computer or not would really have nothing to do with whether any of the properties it has (e.g., mass) are simulated.
Either you take reality to actually have the property, in which case "simulation" doesn't make sense, or you take the property to be faked. If it's faked, being computable or not is irrelevant: there are infinitely many conceivable ways that, globally, all properties could be faked (e.g., by a demon that is dreaming).
I don't see how randomness can make anything non-computable. Sure, you may not know the exact random numbers, but you get a similar enough universe with any other sequence of random numbers.
Also, continuity doesn't mean uncomputability either, because in many cases the infinite amount of computation a continuum would require does not add anything interesting, and a finite approximation works well enough.
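As a toy illustration of both points (my own example): a fully deterministic, computable PRNG stands in for randomness, and a finite approximation of a continuous quantity (pi here) comes out essentially the same no matter which random sequence you use.

```python
import numpy as np

# Toy illustration: a deterministic, fully computable PRNG stands in for
# "randomness", and the macroscopic result (a finite Monte Carlo estimate of a
# continuous quantity, pi) barely depends on which random sequence you use.
for seed in range(5):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(1_000_000, 2))
    inside = np.count_nonzero((pts ** 2).sum(axis=1) <= 1.0)
    print(f"seed {seed}: pi ~ {4 * inside / len(pts):.4f}")
```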
> So the metagame is lost regardless: reality isn't a computer (/ no complete physical theories of reality are computable).
I don't see any evidence for this. For now, we do not have a proof one way or the other. If, for instance, it turns out that quantum computers really can run Shor's algorithm to factor very large numbers, that would be good evidence for a continuum, but we are not there yet.
But even that would not be evidence against reality being a computer, since it would still leave open the possibility that reality is a computer capable of performing operations on real numbers.
Why is randomness non-computable? In computer science, there is a theorem that the class of languages recognized by Deterministic Finite Automata (DFAs) is exactly the class recognized by Nondeterministic Finite Automata (NFAs). It is a non-obvious theorem with a one-page proof, taught in every junior-level theory of computation course, and the same idea is what lets deterministic and nondeterministic Turing machines be used interchangeably in many subsequent proof sketches in these classes.
The "Nondeterministic" in NFA means its transition function goes from states to sets of states, instead of from states to states. Informally, it can explore multiple paths in parallel for the cost of one. They're not probabilistic.
The computational semantics of an NFA simply requires that the next state be one of the allowable next states under the transition function δ: Q × Σ → P(Q).
Thus, this semantics implicitly encodes the notion that the machine is nondeterministically choosing the next state in each execution.
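For concreteness, here is a minimal sketch (the particular NFA and names are mine) of that set-of-states semantics: simulating an NFA by deterministically tracking the set of currently reachable states, which is exactly the subset idea behind the NFA-to-DFA equivalence proof.

```python
# Minimal sketch: simulate an NFA by tracking the set of reachable states,
# i.e. the subset construction behind the NFA/DFA equivalence theorem.

# Transition function delta: Q x Sigma -> PowerSet(Q), written as a dict.
# Example NFA accepting strings over {0,1} whose second-to-last symbol is 1.
delta = {
    ("q0", "0"): {"q0"},
    ("q0", "1"): {"q0", "q1"},  # nondeterministic choice: stay, or guess this is the 1 before the end
    ("q1", "0"): {"q2"},
    ("q1", "1"): {"q2"},
    ("q2", "0"): set(),
    ("q2", "1"): set(),
}
start, accepting = "q0", {"q2"}

def nfa_accepts(w: str) -> bool:
    """Deterministically track every state the NFA could be in after each symbol."""
    current = {start}
    for symbol in w:
        current = set().union(*(delta.get((q, symbol), set()) for q in current))
    return bool(current & accepting)

for w in ["10", "11", "100", "0110", "0", ""]:
    print(w, nfa_accepts(w))
```

Each step collapses the "could be in any of these states" branching into a single set, which is why determinizing an NFA costs at most an exponential blow-up in the number of states, never a loss of expressive power.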
The decision problem of whether an NFA accepts a string w is what allows the informal parallel interpretation: it accepts iff you imagine the computation forking off a new thread at each nondeterministic branch. But to say that this is not nondeterministic, or not probabilistic, is like saying the Many-Worlds Interpretation means there is no real superposition, or something like that. It's like saying a throw of a die does not really involve probability because of a symmetry argument that a die has six equal sides. Mainly, I don't understand that, because I see probability as a way to implement nondeterminism: a system is probabilistic only because it is making nondeterministic choices according to some probability distribution. And checking Sipser, 2nd ed., p. 368: "A probabilistic Turing machine is a type of nondeterministic Turing machine in which each step is a coin-flip step".
Anyhow, my main issue was that the original commenter casually claimed that probability makes things (physics) uncomputable. But Turing computability has nothing to do with probability: as I recall, the closest concept is the nondeterministic Turing machine (NDTM), and it is a basic exercise to show that NFAs and DFAs, as well as NDTMs and DTMs, are computationally equivalent; there are standard theorems for that.
Meaning either they are using an idiosyncratic definition of computability, or they are unfamiliar with an introductory course on the theory of computation, which explains formally what Turing's and Church's work was about when it clarified the concept of computability. Or maybe they have a deeper philosophical disagreement with computability and complexity theorists (maybe they reject Sipser's definition above), but this is standard undergraduate CS curriculum by now, and it could be argued that it is the non-CS experts who haven't thought deeply enough about what computability really is and would benefit from actually engaging with these subdisciplines. I don't know, as they did not reply.
A chaotic grid would be macroscopically observable, because random + random != 2 random; it adds up to a bell curve. Everything would be smeared out as a function of distance, which we don't see.
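A quick numerical check of the bell-curve point (a sketch, with uniform jitters as a stand-in for whatever the grid's randomness is): the spread of a sum of N independent random terms grows like sqrt(N), not N, and its shape tends toward a Gaussian.

```python
import numpy as np

# Summing many independent uniform jitters gives an approximately normal
# spread (central limit theorem), with width growing like sqrt(N) rather than N.
rng = np.random.default_rng(0)
for n in (1, 2, 10, 100):
    sums = rng.uniform(-0.5, 0.5, size=(100_000, n)).sum(axis=1)
    print(f"N={n:4d}: empirical std = {sums.std():.3f}  (sqrt(N/12) = {np.sqrt(n / 12):.3f})")
```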
This characteristic is observable in metals as well. Steel becomes less flexible as it is worked because its grains become smaller and more chaotic: a microscopic property with a macroscopic effect.
If we are talking about a grid with a very small spacing, say around the Planck length, I don't see how we would be able to macroscopically observe it.
Everything we can see move on the grid is at least 20 orders of magnitude bigger than the grid spacing. Any macroscopic objects we can experiment with are more like 30+ orders of magnitude bigger than the grid spacing, and they consist of numerous atoms that are all moving within the object due to thermal jiggling, over distances themselves many orders of magnitude bigger than the grid spacing.
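For scale, a rough order-of-magnitude sketch (my own choice of reference lengths, with the proton standing in for the smallest things we can "see move"):

```python
import math

# Rough orders of magnitude separating a Planck-spaced grid from things we can
# actually observe (reference lengths are approximate).
grid = 1.6e-35                                           # grid spacing ~ Planck length, m
scales = {
    "proton (smallest thing we 'see move')": 0.8e-15,    # ~proton radius, m
    "atom": 1e-10,                                       # m
    "small macroscopic object": 1e-2,                    # m
}
for name, L in scales.items():
    print(f"{name}: ~{L / grid:.0e} grid spacings "
          f"({math.log10(L / grid):.0f} orders of magnitude)")
```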
In physics you never have measurements differentiating between a distance of 2 and, say, 2 + 10^-20, and that leaves enough room to hide any 'bell curve' you want.
This happens in the same way that steel behaves isotropically even though its microscopic structure is anisotropic.
So there is no easy way to prove or disprove continuity of space.