The other founding paper of the field is David Deutsch's superb "Quantum theory, the Church-Turing principle and the universal quantum computer": http://folk.uio.no/ovrum/articles/deutsch85.pdf
Both papers are excellent reading, and surprisingly accessible.
Feynman worked on this subject while working on the Connection Machine supercomputer. 
- Protein folding 
- Neuroscience 
- Battery technology 
PS: Berkeley has been a major force in that area, and it's where many similar projects originated (cf. the BOINC project).
Things that far away are tiny, in a way. :)
might not be the most modern approach, but he's still teaching and updating it
So is this a fatal flaw in Wolfram's "new kind of science" based on cellular automata?
He thinks the Universe is more like a graph of nodes and edges, with particles, interactions, spacetime, etc. emerging from the rewriting of simple patterns in the network http://blog.stephenwolfram.com/2015/12/what-is-spacetime-rea...
I have no idea if it's a fair summary of D&G.
I don't really want to touch on anything having to do with scriptures in the most general sense (the Vedas, chakras, nadis, and the works that describe them); I don't think they are likely to have any utility in understanding gravitation, acceleration, or uniform linear motion at all, and if there is anything in there, it is probably easier to re-discover in a modern theoretical framework than to translate.
Instead, the interesting thing is the Achilles-vs-tortoise argument in the context of approximately 1920 but before the work by Lemaître, Friedmann, Robertson and Walker that led to the underpinnings of the standard cosmology, and most particularly before the late 1920s when the Hubble flow was described and found to apply to all then-known distant galaxies.
Einstein in 1920 had a personal bias towards a static universe for a variety of reasons, although in part that is because evidence at the time did not disfavour one. In such a universe, making some assumptions about the behaviour of its non-gravitational content, there is probably no "universal clock", and so a resort to GR in an Achilles-vs-tortoise argument likely would not prove illuminating (and would be much harder to do quickly).
However, our universe is so close to being isotropic and homogeneous (as far as we can tell) that we almost certainly can rely upon the scale factor from the Friedmann-Lemaître-Robertson-Walker model to be equally valid for all observers. There are additionally relic fields which manifest the scale factor (e.g., the average temperature of the CMB radiation).
The resolution to Achilles-vs-tortoise is that both can agree on the scale factor at the boundary conditions, namely, when they are together at the start of the race and when they are together again after the race has ended. What they will disagree on is only the amount of wristwatch time that has elapsed.
The tortoise has simply travelled much further in spacetime than Achilles, and all observers at both boundary conditions will agree with that, no matter how they have travelled from the start to the end. Even more strongly, any observer who can place the pair of them together at the start of the race at a(t_start) and the pair of them together again at a(t_end) will agree that the tortoise has travelled further in spacetime, although their count of the elapsed time in, say, picoseconds, between a(t_start) and a(t_end) may differ.
But even if we drop the examination of the scale factor and resort only to Special Relativity, we can see with a Minkowski diagram (available in 1920!) that the slopes of the two worldlines are different: the tortoise's is more vertical (where the vertical y axis is the timelike axis). Choose convenient units where c=1, with actual seconds on the y axis and light-seconds on the x axis (so light travels at one light-second per second by choice of units), and use a metric like ds^2 = c^2 dt^2 - dx^2. If Achilles takes 300 seconds of coordinate time to run from origin to finish while the tortoise takes 3000 seconds, then "s" is much bigger for the tortoise from start to finish, but approximately the same when you trace from boundary condition (the pair together at the start) to boundary condition (the pair together at the end). But, using just SR, the less time Achilles takes to run the race, the smaller his "s" is compared to the tortoise's. He is travelling a shorter distance in spacetime, even though the number of ticks along the x axis is the same for him and for the tortoise, and we can calculate the exact difference in distance using the Lorentz formula.
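The arithmetic above is easy to check numerically. A minimal sketch in Python, computing the proper time (spacetime path length) from the interval ds^2 = dt^2 - dx^2 in c=1 units — note the 100-light-second race length is my own illustrative assumption, not a figure from the comment:

```python
import math

def proper_time(dt, dx):
    """Wristwatch time along a constant-velocity segment,
    in units where c = 1: tau = sqrt(dt^2 - dx^2)."""
    return math.sqrt(dt**2 - dx**2)

RACE_LENGTH = 100.0  # light-seconds; assumed for illustration

tau_achilles = proper_time(300.0, RACE_LENGTH)   # fast runner: 300 s coordinate time
tau_tortoise = proper_time(3000.0, RACE_LENGTH)  # slow runner: 3000 s coordinate time

print(tau_achilles)  # ~282.8 s of wristwatch time
print(tau_tortoise)  # ~2998.3 s of wristwatch time
```

The slower tortoise accumulates more proper time along its worldline, which is exactly the "travelled further in spacetime" claim.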
We are not required to use the flat space metric, or Euclidean coordinates (heck, what's the difference between "x" and "r" (from polar coordinates) in the example above?); general covariance (from General Relativity) guarantees that no matter what set of smooth coordinates we use, or what units we use, faster Achilles traverses less spacetime between the boundary conditions than the slower tortoise.
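Full general covariance takes more machinery than a few lines, but the special-relativistic piece — that the interval is coordinate independent — is easy to demonstrate: apply a Lorentz boost and check that t^2 - x^2 comes out unchanged. A sketch, with the event and boost velocity chosen arbitrarily for illustration:

```python
import math

def boost(t, x, v):
    """Lorentz boost by velocity v, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (t - v * x), gamma * (x - v * t)

t, x = 300.0, 100.0        # an illustrative event, not a figure from the comment
tb, xb = boost(t, x, 0.5)  # view it from a frame moving at half lightspeed

print(t**2 - x**2)    # 80000.0
print(tb**2 - xb**2)  # 80000.0, up to rounding: the interval is invariant
```

Any observer, in any inertial frame, computes the same "s" between the boundary conditions, so the comparison between Achilles and the tortoise doesn't depend on who is doing the bookkeeping.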
The critical point that I do not see in the debate is that the fixing of the boundary conditions is important (and we now know this because of work done since Einstein's death, particularly since the development of 3+1 formalisms). The critical boundaries are when the pair are together again, not when either of them is at the start gate or the finish ribbon.
To us in 2016, this is a simple Cauchy problem. However, in 1920 it would have been at least novel (IIRC, Hadamard's lectures hadn't happened yet, for example) and certainly not a first-choice tool for resolving a seeming paradox.
edit: I reread the article at the top and realize I should substitute April 1922 for 1920 (and various approximations) above. I don't think the exact date is terribly important; the key thing is that April 1922 is before the Hubble flow was understood, and before initial-values-surface/boundary-conditions approaches and close relatives were in use in a GR context.
Therefore my question is, Can physics be simulated ...
The C, on the word Can, is capitalized after a comma?