I did some experiments with creating looped animations a few years ago:
With the examples I posted above, since they wrap/loop in the time dimension, there is always some similarity/consistency. For example, in the one with the rabbits, if a rabbit dies, that means there will have to also be a birth to get back to the correct number of rabbits before it loops.
I used this repo.
The algorithm is literally just a constraint solver. I'm pretty sure it's similar to the Sudoku solver I wrote in Prolog for a university course.
It's kinda pretentiously named and described, probably because that sounds more academic. It's just a constraint solver.
Fun stuff, but I struggled to get much value out of it for level gen. You get cool patterns, but levels need structure and intent to be interesting. Adding constraints to the algorithm becomes a big-O nightmare, and you end up with frequently unsolvable states as the algorithm recurses.
The game Bad North used it to good effect, so depending on the game it may be a very useful tool in the toolbelt.
You specify which cells in a 3D grid are occupied by clicking, and it fills in the details to make a charming little town.
Here's a good (free) explanation:
Specifically, read the 'Least Entropy' section. It's broken down very simply:
Pick the 'least random' cell, i.e. the one with the fewest remaining possibilities, because (having already propagated what constraints we can) this is the cell least likely to cause problems (i.e. unsatisfiable constraints) later.
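That selection step can be sketched in a few lines, assuming each cell's domain is stored as a set of still-possible tiles (the names here are illustrative, not from any particular implementation):

```python
import random

def pick_least_entropy_cell(domains):
    """Pick the undecided cell with the fewest remaining options.

    `domains` maps cell coordinates to the set of tiles still
    allowed there. Cells with exactly one option are already
    decided; ties among the rest are broken randomly.
    """
    undecided = {cell: tiles for cell, tiles in domains.items() if len(tiles) > 1}
    if not undecided:
        return None  # every cell is decided; the grid is complete
    fewest = min(len(tiles) for tiles in undecided.values())
    candidates = [cell for cell, tiles in undecided.items() if len(tiles) == fewest]
    return random.choice(candidates)
```

Real implementations often weight the entropy by tile frequency rather than just counting options, but the "most constrained first" idea is the same.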
You might also like to read this (also free) paper on implementing WFC using a constraint-solving library: https://canvas.ucsc.edu/files/109152/download?download_frd=1, which I quote here:
> The heuristic of selecting the most constrained variable or equivalently the variable with minimum remaining values (MRV) is well known in constraint solving.
> Since there is more than one valid pattern for that location—or it would already have been set to zero entropy in the previous loop—one of those patterns needs to be chosen. One of the patterns is chosen with a random sample, weighted by the frequency that pattern appears in the input image.
> This implements Gumin’s secondary goal for local similarity: that patterns appear with a similar distribution in the output as are found in the input.
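The weighted choice the paper describes is just frequency-weighted sampling over the cell's remaining patterns. A minimal sketch, assuming you've already counted how often each pattern occurs in the input (names are illustrative):

```python
import random

def choose_pattern(possible, frequencies):
    """Pick one of the still-valid patterns for a cell, weighted by
    how often each pattern appeared in the input image."""
    patterns = list(possible)
    weights = [frequencies[p] for p in patterns]
    return random.choices(patterns, weights=weights, k=1)[0]
```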
If you're familiar with constraint solving, "superposition" is just "the remaining possible choices in the domain" and "entropy" is just describing how to select the next node.
No way to read online for free?
Academics don't make any money from sales of paywalled journal articles. If you click a link to a journal article and hit a paywall, it's usually because whoever shared the link has institutional access to the journal and forgot that the paywall was there (since it effectively isn't, for them).
Yes, it is a big inspiration, and there are numerous beautiful examples of its results. However, the QM wording is at best an extremely loose analogy (one that a hacker would use). If you want to compare it to other methods, see "WaveFunctionCollapse is constraint solving in the wild" https://dl.acm.org/doi/10.1145/3102071.3110566.
"WaveFunctionCollapse is constraint solving in the wild."
But I guess I am the only one who used the Zelda: A Link to the Past overworld as the training input ;-)
My problem with that was that the algorithm basically always ran into unsolvable states. The WFC answer to that is to just start over, but most WFC implementations use only a few (<10) tiles. The Zelda map had several hundred; you can actually see the algorithm searching for valid solutions in the videos I linked.
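That restart-on-contradiction strategy is, in essence, just an outer retry loop around the solver; with hundreds of tiles each failed attempt is expensive, which is why you can watch it thrash. A sketch of the idea (`solve_once` stands in for one full WFC run and is a hypothetical callback, not part of any real library):

```python
def run_with_restarts(solve_once, max_attempts=1000):
    """Naive WFC failure handling: if a run hits a contradiction
    (some cell ends up with zero remaining options), throw the whole
    attempt away and start over from scratch.

    `solve_once` is assumed to return the finished grid on success
    and None on contradiction. Fine for small tile sets; with
    hundreds of tiles the restarts pile up fast.
    """
    for attempt in range(1, max_attempts + 1):
        result = solve_once()
        if result is not None:
            return result, attempt
    raise RuntimeError("no solution found within max_attempts")
```

Smarter alternatives are backtracking (undo only the last choice) or backjumping, which is what general constraint solvers do instead of restarting.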