So, for "pure" tasks which take input and produce output only at the very end, the Hashlife algorithm [0] allows you to run arbitrarily many generations of life in constant time at the expense of a huge internal cache (memory usage). That is, with Hashlife you can trade memory for computational efficiency. That means that for a sufficiently difficult task like mining bitcoin, this computer could be simulated faster than any conventional computer could run at the expense of using more memory.
My question: does anybody know what the ratio of trade-off between time and space is for Hashlife? I haven't been able to find anything from my searches.
You could do the same without transforming to the game of life: iterate over all possible states the memory can be in, and emulate a CPU with that memory for N steps.
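A minimal sketch of that memoisation idea (my own illustration, not from the parent; cpu_step below is a hypothetical stand-in for a real single-instruction emulator):

    from functools import lru_cache
    import hashlib

    def cpu_step(state: bytes) -> bytes:
        # Stand-in "CPU": any deterministic state -> state function works
        # for illustration; a real emulator would decode and execute one
        # instruction against the registers + memory packed into `state`.
        return hashlib.sha256(state).digest()

    @lru_cache(maxsize=None)
    def run(state: bytes, n: int) -> bytes:
        # Memoise (state, n) -> final state: the second time the machine
        # reaches a previously seen state, n steps cost one dict lookup.
        for _ in range(n):
            state = cpu_step(state)
        return state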
But if you have 640 kilobytes of memory (ought to be enough for anyone), that means your cache could need up to 2^5242880 entries, one for each possible state of those 5,242,880 bits.
When transforming to the game of life, every bit of memory becomes many cells in the grid, so the requirements would be even worse.
Hashlife can theoretically "cache" a specific input and output, e.g. running an instruction on your simulated computer like "add r0, r1, r2" (when r0, r1, and r2 contain specific values that have already been used with this instruction, and assuming that any nearby internal state is also identical).
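In code, the cacheable unit might look roughly like this (a toy sketch of the memoisation idea only; real Hashlife recurses on quadtree nodes, advancing a 2^k-cell block by 2^(k-2) generations per cached lookup):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def centre_after_one_step(block):
        # block: 4x4 tuple of tuples of 0/1. The 2x2 centre's next state
        # is fully determined by the 4x4 neighbourhood, so it can be
        # cached: the second time the same 4x4 pattern appears anywhere,
        # its evolution is a dictionary lookup instead of a computation.
        def next_state(r, c):
            live = sum(block[r + dr][c + dc]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            return 1 if live == 3 or (block[r][c] and live == 2) else 0
        return ((next_state(1, 1), next_state(1, 2)),
                (next_state(2, 1), next_state(2, 2)))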
But this doesn't let you magically mine bitcoin in fast-forward, because in order to mine bitcoin, you need to insert random bytes into your block and keep changing them until you get lucky and find a hash that starts with a certain number of zeroes. This effectively guarantees that you aren't going to have cached computation snippets that are big enough to skip any meaningful amount of work.
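To make that concrete, mining is essentially this loop (a toy version; real Bitcoin double-SHA256-hashes an 80-byte block header against a much tighter target):

    import hashlib, struct

    def toy_mine(header: bytes, zero_bits: int = 16):
        # Accept any hash whose top `zero_bits` bits are all zero.
        target = 1 << (256 - zero_bits)
        nonce = 0
        while True:
            # Each iteration changes the nonce, so no two hashing runs
            # share input; there is nothing for a cache to reuse.
            payload = header + struct.pack("<I", nonce)
            digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce, digest.hex()
            nonce += 1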
At best, you might be able to fast-forward at the instruction level granularity, at which point you might as well be mining on your CPU.
I'm not so sure this is correct. [0] says that Hashlife has logarithmic complexity on average. In the worst case it is O(n), since it could have to look at every element to see whether a pattern even exists [1]. Therefore the worst case for any algorithm running on a Life Turing machine under Hashlife is the same as if it were running directly on the machine.
Hmm, the phrasing on that first link does not lead to its stated conclusion:
> Hashlife can take the same length of time to compute from generation N to generation 2N for any large enough N - it has logarithmic complexity.
If 2N takes as much time as N, then 4N takes as much time as N, and 8N takes as much time as N, and so on (for any large enough N). This seems to indicate, asymptotically, constant time, not logarithmic.
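FWIW, the two readings are compatible if you take the quoted claim at face value: a constant cost $c$ per doubling gives

    $T(N \to 2^k N) = \sum_{i=0}^{k-1} T(2^i N \to 2^{i+1} N) = kc$

so reaching generation $G$ from $N$ costs $c \log_2(G/N)$: constant per doubling step, but logarithmic in the target generation.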
That question is still open for submissions, and this computer looks to be significantly smaller than the one submitted there. If someone were willing to expand the computer a bit (memory currently seems a bit small) and then port Tetris to it then I suspect they'd be able to beat the QFT processor.
It’s interesting watching the messy complexity resolve into patterns and order as you zoom out.
It’s reminiscent of what it feels like to dig deeply into a metabolic or reproductive process in a cell and be overcome with the sheer complexity of it, then look at the back of your hand and realize it’s all happening right there, now.
The first time I heard of Conway's Game of Life I was blown away. It was an example of how incredible complexity can arrange itself into large-scale emergent patterns. It was the first time I took the idea of the universe being simulated (or created) seriously. Even more fascinating for me is the idea that we can't analytically calculate the outcome. To find out what happens we have to simulate it.
Even if the real world has a creator (or is a simulation), my money would be on it being impossible for the creator to know what is going to happen. Life would be the simulation.
I would've loved videos like this when I was studying Biology! This kind of explanation with the video is amazingly clear. It reminds me a lot of how the signal is generated in the eye through the build-up of charge after light hits the retina.
Every time I go down this rabbit hole I get re-convinced of panspermia. Maybe when I retire I'll study enough statistics and molecular biology to try to get my head around it. Or by then we'll be talking to aliens and the question will be moot.
You can do a MOOC on molecular biology pretty easily if your goal is to get a superficial understanding of it. You don’t need to wait until you retire :)
>we can't analytically calculate the outcome. To find out what happens we have to simulate it.
I've read Wolfram claim this, but as a conjecture, iirc. Is there a formal proof that in order to predict life patterns we essentially need to run the cellular automaton?
Amazing how if you just stare at the logic gates you can kinda figure out what they mean. Makes you think maybe our meatware and the computers we use aren't as far apart as they seem.
Just like building a real computer, you don't have to hold all of the concepts in your mind at once, just start with basic building blocks and work your way up the ladder of abstraction to parentheses or space invaders or whatever floats your boat. :)
That can fall down a little when it comes to debugging. You end up having to break everything back down to component parts more than if you could keep larger parts in your head at once.
I probably could do it; understanding all the parts implies that. Eventually. Just not in a reasonable amount of time, and perhaps not as neatly.
I wonder what the most elegant Turing-complete universe is.
I believe it to be a system based on collision-based computing [1].
My hunch is this: matter and energy have a duality that is analogous to code and data. Einstein likened the speed of light to the speed of information. Wolfram, Toffoli, Fredkin, and other cellular automata specialists have studied the rate of propagation in CAs via Lyapunov exponents. [2]
Now it seems to me, that if we can create a digital system which provides for code <-> data duality analogous to Mass–energy equivalence [3], then we should be able to get stable structures that hold data, and stable processes that process data. I have tried for many years on and off with various CAs. Many of the CAs have been unconventional, like cells containing arrays of particles, particles being objects rather than simple numbers or enum types. And CAs based on reals or complex numbers or associated vector fields.
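For a flavour of what a CA over reals can look like (a made-up toy for illustration, not one of the systems described above): a 1-D lattice that diffuses toward its neighbours, then applies a logistic-style nonlinearity.

    import numpy as np

    def step(state, alpha=0.2):
        # Mix each cell with its neighbours (periodic boundary), then
        # push the result through a logistic-map-style nonlinearity.
        left, right = np.roll(state, 1), np.roll(state, -1)
        mixed = (1 - 2 * alpha) * state + alpha * (left + right)
        return np.clip(4 * mixed * (1 - mixed), 0.0, 1.0)

    state = np.random.rand(128)
    for _ in range(100):
        state = step(state)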
In some respects. Polymorphic code is more like it, i.e. DNA. DNA is data, but it creates processes (code) that produce proteins that then alter the data through histones and methylation and transcription factors. It becomes insanely non-linear and chaotic, the organism becoming an amalgamation of code and data in flux.
This reminds me of 'Permutation City', one of my favourite sci-fi novels. Cellular automata play a pivotal role in this book, which, despite some small misgivings, has largely aged rather well.
Given the fact that we can build a computer in the game of life, this breaks down to a simpler question: can we have intelligence in a computer? If so, it can also occur in the game of life.
It would be difficult. In particular, unlike in our world, it's very difficult to create anything like a stable membrane or barrier in the game of life. And patterns are extremely susceptible to falling apart if perturbed. So you would generally expect a game-of-life-based entity touching anything to die as a result.
Since the game of life is Turing complete, it can simulate any computer. If a computer can eventually simulate the world (including humans), so can the game of life.
This doesn't mean the patterns would visually look like cells and membranes though...
Turing machines in the game of life have the same problem of being unprotected and extremely sensitive to perturbations. It drastically lowers the chance of generating a working one (even when running for extremely long times on extremely large boards). Contrast with our world, where it seems like just starting with a whole lot of randomly distributed hydrogen could be enough.
It's hard to imagine an emulated version of a human having a consciousness, however. What is it that separates transistors firing randomly from a CPU executing specific consciousness-generating instructions? I find these things thought-provoking.
Yes. I don't expect anyone can simulate that at real-time speed though (unless they take a lot of shortcuts, e.g. simulating at a higher level, which becomes a bit pointless).
The interestingly useless and uselessly interesting are not only welcome here, they're at the apex of the tradition that HN (at its best) helps to continue. A post like this never needs an answer to "why"!
They knew it was a low-quality question and so created a new account in order to post that comment. And despite knowing it was a crap comment, they cared enough to make the effort to create the new account. I don't get what motivates some people.
We do not know how a ‘soup of chemistry’ (e.g. biology) does information processing.
The more it looks like soup, the more we might be able to intuit how to tackle things that are unbounded in complexity. E.g. how do millions of cells coordinate growth to make or repair an organ and then stop? What are the chemical initial conditions that generate this complex orchestra? What tells it to stop? Could we speak that language if we knew it?
An obvious example would be intellectual cross-pollination. This project is likely to get a lot of exposure in the Game of Life community as well as the Computer Science community, increasing the chances that members of either community feel compelled to cross over and contribute in an area that may have previously seemed uninteresting/unpromising/unknown.
While that's true for posts like this in general, in this specific case those communities are the same. Conway's Game of Life is a computer science curio.
Given enough pebbles and a large enough beach, you could make a drawing--and so a design--of just about anything. But this is arguably both a design and a concrete implementation of a kind of machine.
Your phone is just a drawing/design until you put energy into it as well. Then things start to "compute" due to the pattern of interactions that energy creates. There is no law of the universe that says this energy must be electrical.
Nothing about the rules of GOL performs the necessary pattern transformations. You could say the pebbles follow the rules of GOL and put some energy into performing those transformations. This is what a computer does with GOL.
Nothing comes of a starting condition for the GOL until you put energy into it. There is no reason, for instance, you couldn't have a little robot moving the pebbles according to GOL rules. If you say the robot is the thing doing the computation, then the same is true of the computer running this GOL simulation.
If you say it's not the pebbles, then imagine a mechanical computer built out of them. Provide some kinetic energy and the rocks begin to compute, just as the silicon in your computer begins to compute when you put electrical energy into it.
Perhaps this specific exercise does not necessarily generate any utility in and of itself, but the concept is certainly quite interesting. Many would argue a bunch of pebbles on the sea shore couldn't possibly be alive, yet many still hope we can bring life to a machine of silicon. Clearly there must be some difference between the two, unless there isn't. Relevant XKCD: https://xkcd.com/505/
In case someone doesn't already know it, here is Conway's Game of Life simulated in Conway's Game of Life:
https://www.youtube.com/watch?v=xP5-iIeKXE8