Great work! The fact that it captures "long-range order" seemingly perfectly is something not many have been able to do before! And the "collapse" visualization is great fun to watch.
But is your algorithm really qualitatively all that different from previous search methods (e.g. Efros and Leung), if you are still (uniform random?) sampling over the input distribution of patches?
I notice also your input textures tend to have sharp boundaries (as is common in pixel art). It would be interesting to see the results when the input has a lot of "noise", such as high def images of rocks or clouds ;)
While I still prefer search methods because they are easy to implement (and give better results), "deep" methods are definitely gaining some ground:
Deep Textures
http://bethgelab.org/deeptextures/
Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis
https://arxiv.org/abs/1601.04589
Efros and Leung's method doesn't satisfy the (C1) condition. The closest previous work is Paul Merrell's model synthesis.
WFC and texture synthesis serve similar purposes: they produce images similar to the input image. However, the definition of what counts as "similar" is different in each case. If you have a high-def input with noise (like realistic rocks and clouds), then you really want to use texture synthesis methods. If you have an indexed image with few colors and you want to capture... something like the inner rules of that image and long-range correlations (if you have the output of a cellular automaton, for example, or a dungeon), then you want to use WFC-like methods.
> something like the inner rules of that image and long range correlations
I assume that if you feed WFC a large input image, it just thinks of that as a very complex set of rules that are harder to satisfy than those of a small input?
Is there a way, then, to instead train the WFC algorithm on a large corpus of small, similar samples, such that it can try to derive the rules common to all the inputs in the corpus, and produce one image that "really" fits the rules, rather than just the ephemeral quirks from an individual sample?
Would there be, for example, a way to train WFC to produce outputs matching the level-design "aesthetic" of a given game, rather than just "continuing" a particular level?
About harder and easier to satisfy: the question of how the rate at which the algorithm runs into contradictions depends on the input is not easy at all. There is no simple correlation between the contradiction rate and the size of the input.
But the first thing you'll notice if you feed it an image with a lot of patterns is that it will work very slowly.
Yeah, the corpus thing can be done if we cut out rare patterns and leave only frequent ones. I haven't tried it though.
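Since nobody here has tried it, treat this as a sketch only: one minimal way to do the corpus idea in Python, merging pattern counts across all samples and dropping the rare ones. The `min_count` threshold is a made-up parameter, and rotations/reflections are ignored for brevity:

```python
from collections import Counter

def corpus_patterns(samples, n, min_count=2):
    """Count NxN patterns across a corpus of small 2-D samples (lists of rows),
    keeping only patterns frequent enough to look like shared rules rather
    than quirks of an individual sample."""
    counts = Counter()
    for grid in samples:
        h, w = len(grid), len(grid[0])
        for y in range(h - n + 1):
            for x in range(w - n + 1):
                pattern = tuple(tuple(grid[y + dy][x + dx] for dx in range(n))
                                for dy in range(n))
                counts[pattern] += 1
    # Rare patterns are treated as quirks of individual samples and dropped.
    return {p: c for p, c in counts.items() if c >= min_count}
```

The surviving counts would then play the role that single-image pattern counts play in ordinary WFC.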
A very good question! The opposite is also important: can we follow some heuristics while creating tilesets that minimize contradiction rates without making the tilesets easy? I don't know. If someone knows, please tell me.
>I assume that if you feed WFC a large input image, it just thinks of that as a very complex set of rules that are harder to satisfy than those of a small input?
Since the input is shredded into a multiset of N-by-M rectangles, it's the opposite: assuming the small input image is a portion of the large one, the large image's model adds examples to those of the small image's model, so the set of cases it can fit an example to is the same or larger.
You mentioned rotation and mirroring type bitmap ops in your algorithm. But how do we precisely apply a series of dynamic filters to the input to "evolve" the patch? Even using overlapping kernels it seems quite intensive! And prone to artefacts...
You might be interested in this work showing texture synthesis over a 2d surface with non-uniform geometry, rotation, scale, or even velocity: http://hhoppe.com/proj/apptexsyn/
Didn't know where else to contact you, but I made a straight [Java port](https://github.com/Aqwis/JavaWaveFunctionCollapse) just as a fun exercise. Feel free to add it to your README.md if you want.
It would be interesting to apply this concept to wavelets (instead of pixels or voxels) in order to work on real-life pictures.
Also, 3 dimensions as in X, Y and time, to work on animated GIFs. Think about an infinite, never repeating Blue Ball Machine! http://m.imgur.com/5Flh68G
This is crazy, and I think it hints at the possibility of universe creation:
you start from a finite pattern, which becomes the 'rules' of your created universe.
Then, using this wave function collapse algorithm, you expand it into an infinity where the possibilities are endless within the constraints of those generator rules.
Fantastic stuff. I love it! As with most machine learning stuff and these very cool ideas (which I am very hazy on, especially their categorizations), I immediately want to use it for our lab's brain work.
We have a LOT of 3-D images (each voxel is ~200 nm × 200 nm × ~600 nm, for a lot of 1028x1028 .tiff images all stitched together) and would love to feed these images BACKWARDS into this. I.e., we have the 'far field', and we want the 'elements' that make it up. Say we have a large amount of data on the pituitary gland, and we want to compare the 'elements' of the pituitary to the 'elements' of the hippocampus. We know a lot of these differences, but there may be 'something else' in there that we humans are not seeing. This may be of great use to us for disease pathology, like Lewy body disease precursors.
Sounds like an altogether easier problem, assuming you don't care to compensate for noise in your data somehow - supposing you want to identify all distinct m×n×k-sized "elements", simply use some appropriate rolling hash[1] (i.e. a hash of a window that you can update in constant time as you slide the window) as a key mapping to a list of "elements" you have seen so far with that hash, and only do pointwise comparison (to see if you have seen that exact pattern before) if the hashes match. Assuming you don't have too many distinct elements, this should give you performance close to linear in the size of your data.
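A hedged sketch of that bucketing scheme in Python. For brevity this recomputes a hash per window instead of updating a true rolling hash in O(1) as the window slides, but the bucket-then-compare-pointwise structure is the same:

```python
from collections import defaultdict

def distinct_elements(volume, m, n, k):
    """Group every m*n*k window of a 3-D array by hash, doing pointwise
    comparison only within a bucket. A real rolling hash would update each
    window's key in O(1) as it slides; Python's tuple hash is recomputed
    per window here for brevity."""
    X, Y, Z = len(volume), len(volume[0]), len(volume[0][0])
    buckets = defaultdict(list)  # hash -> distinct windows seen with that hash
    for x in range(X - m + 1):
        for y in range(Y - n + 1):
            for z in range(Z - k + 1):
                win = tuple(volume[x + i][y + j][z + l]
                            for i in range(m) for j in range(n) for l in range(k))
                bucket = buckets[hash(win)]
                if win not in bucket:  # pointwise check on hash collision
                    bucket.append(win)
    return [w for b in buckets.values() for w in b]
```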
Whew, now that was a Wikipedia binge. Thanks for the leaping-off point! Unfortunately, our data is VERY noisy. We can do some techniques to smooth the data, but the unfortunate part is that the things that matter in biology are under the diffraction limit of light. Inherently, what we want out of the data will then be noisy. Gestalt structures are less noisy, and these techniques can help with that (think using a fiber-optic read-out as you go into the brain for surgery so you know what region you are in), at least I think. Also, the 'elements' of our data set are unknown, but their number is likely very large. Maybe in the 100s to 1000s, not 5 or 10. It turns out the brain is complicated, who knew?!
Still, thanks a ton for this info! I think it can really help with some computational bio stuff another lab is working on here (viral similarity in genes in your DNA)
Nicely done! Your 3d extensions could be further extended (performance notwithstanding) to create some pretty cool procedurally generated VR experiences. Thanks for sharing.
I don't really understand the technique, but I would like your thoughts on whether it's possible to give a Penrose tile set as a seed and see if aperiodic order is generated.
I'm not sure, but I think that Penrose tilesets are what I call "easy": you can't run into a situation where you can't place a new tile. It would be great if someone here could confirm or deny this.
So if this is the case, then Penrose tilesets are not interesting to WFC, because you can produce arbitrary tilings with much simpler algorithms.
Right now, though, WFC works only with square tiles, but it's not hard to generalize it to arbitrary shapes. Paul F. Harrison made a tiling program that supports hex tiles: http://logarithmic.net/pfh/ghost-diagrams See also the relevant paragraph in the readme (just search for the word "easy").
Penrose tiles are "easy" on an unbounded canvas, but I'm pretty sure they're a 100%-probable "contradiction" (because they're aperiodic) on a bounded toroidal canvas.
I think you're right about Penrose tiles being easy. One could, however, potentially use this with polygonal tiles to find their Heesch numbers and maybe even make some advancements in solving some of the many unsolved problems associated with Heesch tiling.
Maybe it could be interesting to place Penrose tiles with the simple algorithm, but color them with your algorithm just like you're currently coloring squares.
As I understand it: this treats image generation as constraint satisfaction. The constraints are that each NxN patch appears in the source image. The satisfaction method is arc consistency https://en.wikipedia.org/wiki/AC-3_algorithm, except, when that settles down prematurely, pick the most-constrained (lowest-entropy) undecided patch and make a random valid choice, then continue. (If this leads to getting stuck, then give up instead of backtracking.)
Is that the idea? The description wasn't completely clear to me.
Yes, (C1) is a constraint problem. But we also want to satisfy (C2) as closely as possible, otherwise we could have just colored some outputs in a single color.
Yes, I took that to mean, when you're making a random valid choice, the weights come from the input distribution.
I don't understand about "a single color" unless you mean setting all output pixels to the same color, which would only satisfy the constraints if the input has an NxN patch all one color.
I hope my comment didn't seem to imply that the work seemed unoriginal. I don't think that; I wanted to check my reading.
Thank you for writing this (and your last comment) out explicitly. The quantum mechanics description confused me (and possibly others reading the thread here), although I can understand how it inspired this work in the first place.
The algorithm used is exactly what you described here. It wasn't obvious to me that probability density functions are not tracked (the algorithm only tracks which NxN patches are allowed at each location) and that randomness only comes into play when a random valid choice is made; there, each valid patch is chosen with probability proportional to its number of occurrences in the input.
Most of the examples in the repo do have those all-one-color NxN patches. Without (C2), the algorithm could have generated completely empty integrated circuits, or completely grass-covered terrain, which would be really boring.
You understood right, it's constraints + probabilities.
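For readers trying to follow the exchange, here is a minimal sketch of that "constraints + probabilities" loop in Python. The `patterns`, `weights`, and `compatible` names stand in for whatever the real C# implementation precomputes from the input, so this is an illustration of the idea, not the repo's code:

```python
import math, random

def wfc(width, height, patterns, weights, compatible):
    """patterns: pattern ids; weights[p]: input frequency of pattern p;
    compatible(p, q, dx, dy): True if q may sit at offset (dx, dy) from p."""
    # Every cell starts in full superposition: all patterns still allowed.
    grid = [[set(patterns) for _ in range(width)] for _ in range(height)]

    def entropy(cell):
        total = sum(weights[p] for p in cell)
        return -sum(weights[p] / total * math.log(weights[p] / total) for p in cell)

    while True:
        # Observe: pick the undecided cell with the lowest entropy and
        # collapse it to one pattern, weighted by input frequency.
        open_cells = [(x, y) for y in range(height) for x in range(width)
                      if len(grid[y][x]) > 1]
        if not open_cells:
            return grid  # fully collapsed, no contradiction
        x, y = min(open_cells, key=lambda c: entropy(grid[c[1]][c[0]]))
        choices = list(grid[y][x])
        pick = random.choices(choices, [weights[p] for p in choices])[0]
        grid[y][x] = {pick}

        # Propagate: AC-3-style; remove neighbor patterns that lost all
        # support, and keep going until the grid is stable again.
        stack = [(x, y)]
        while stack:
            cx, cy = stack.pop()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = cx + dx, cy + dy
                if not (0 <= nx < width and 0 <= ny < height):
                    continue
                removed = {q for q in grid[ny][nx]
                           if not any(compatible(p, q, dx, dy) for p in grid[cy][cx])}
                if removed:
                    grid[ny][nx] -= removed
                    if not grid[ny][nx]:
                        return None  # contradiction: give up, no backtracking
                    stack.append((nx, ny))
```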
I might try this approach to generate formal poetry -- it's something I've done by backtracking before, and I'd considered doing something like your ConvChain.
At first I thought that my methods don't offer anything new to text generation beyond Markov chains, but several people have already proposed ideas that sound sensible, so let me know if you make anything!
Another multi-dimensional abstraction for music that's at a bit higher level is a musical lattice. The number of dimensions is limited by the largest prime you accept in the ratios you use. (Typical western music approximated by 12-tone equal temperament uses primes of 2, 3, and 5. Powers of 2 are often ignored because notes an octave apart are perceived to be sort of equivalent for harmonic purposes.)
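To make the lattice concrete, here is a small illustrative sketch (Python, not from the thread): factor each just-intonation ratio into prime exponents, drop the powers of 2 for octave equivalence, and the remaining exponents are the lattice coordinates:

```python
from fractions import Fraction

def lattice_coords(ratio, primes=(3, 5)):
    """Map a just-intonation ratio to lattice coordinates, one axis per
    accepted prime. Powers of 2 are ignored (octave equivalence)."""
    frac = Fraction(ratio)
    coords = []
    for p in primes:
        e = 0
        num, den = frac.numerator, frac.denominator
        while num % p == 0:
            num //= p; e += 1
        while den % p == 0:
            den //= p; e -= 1
        coords.append(e)
    return tuple(coords)

# A perfect fifth 3/2 sits at (1, 0); a major third 5/4 sits at (0, 1).
print(lattice_coords(Fraction(3, 2)), lattice_coords(Fraction(5, 4)))
```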
A spectrogram is 2D (a plot of amplitude given time and frequency).
It's interesting to think about for a spectrogram because "similarity" is different in each dimension (freq vs. time).
Frequency is also perceived logarithmically, so you would probably want to convert to e.g. Mel scale before applying this algorithm (a 2000-2100Hz change is much subtler than a 200-300Hz change).
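For example, with the common Mel-scale formula (one of several variants in use), the same 100 Hz step shrinks dramatically at higher frequencies:

```python
import math

def hz_to_mel(f):
    """Common Mel-scale formula: equal Mel steps approximate equal perceived
    pitch steps, compressing high frequencies logarithmically."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

# The 2000->2100 Hz step is far smaller in Mel than the 200->300 Hz step:
print(hz_to_mel(2100) - hz_to_mel(2000))  # ~41 mel
print(hz_to_mel(300) - hz_to_mel(200))    # ~119 mel
```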
Isn't that 3 dimensions (amplitude, time, and frequency)? The plot of course fills 2 spatial dimensions and uses color to represent the 3rd dimension. But I don't know very much about this.
I don't have a mathematically rigorous understanding of it but the number of dimensions is basically the number of freely varying inputs to the corresponding functional representation.
In a 2d image, x position and y position are mapped to a color, e.g. I(x,y) = C.
In a spectrogram, freq and time are mapped to a color (amplitude) e.g. S(f,t) = A.
In neither case can you pick an arbitrary color or amplitude and, in general, recover a unique x/y or f/t from it.
You can also view digital images as 1-dimensional arrays of bits. (This is roughly how fax machines work.) That doesn't mean they can't also be 2-dimensional images, or representations of 3-dimensional images, or indeed an encoding of a 3D scene directly.
Similarly, you can unpack a linear sequence of sound samples into a two-dimensional plot of frequency and amplitude with a fourier transform.
Or you can represent the music as instructions to performers or synthesizers (ie notation) and you've got as many dimensions as you want.
Music is not sound; it's made of sounds. The fact that it gets mixed down to a single waveform when you consume it, either in the studio or when it hits your ear, isn't particularly relevant to how it's made. I suppose the same is true of images, though.
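On the Fourier-transform point above, a minimal spectrogram sketch in Python with numpy might look like the following; the window and hop sizes are arbitrary illustrative choices:

```python
import numpy as np

def spectrogram(samples, window=256, hop=128):
    """Short-time Fourier transform: slice a 1-D signal into overlapping
    windows, taper each with a Hann window, and FFT it, yielding a 2-D
    (time x frequency) magnitude array."""
    frames = [samples[i:i + window] * np.hanning(window)
              for i in range(0, len(samples) - window + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, window//2 + 1)

# e.g. one second of a 440 Hz tone sampled at 8 kHz:
t = np.arange(8000) / 8000.0
print(spectrogram(np.sin(2 * np.pi * 440 * t)).shape)
```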
Markov chains applied to midi have been able to make "locally similar" stuff since forever. I wonder how this algorithm could be applied to higher-order aspects of music notation.
We still call images "two-dimensional" when they're colored. There is a difference between continuous dimensions like space and time, and discrete dimensions like color channels in an image, or like instrument "tracks" of a song. The latter can have correlations, but they'll be sparse associations, rather than structural formulaic ones.
You're describing audio, not music. Music as data isn't quantified in the same way as audio (see the various forms of musical notation that exist), and in fact music displayed in a sequencer or tracker looks pretty similar to the 2D bitmaps in the article.
Amazing. Make sure to scroll through the whole README to see voxel models. Rendered using http://ephtracy.github.io/index.html which is in itself pretty cool too.
Excellent! I especially like the example third from the top (black/white "rooms"). This would be a great tool to generate maps for roguelikes in a more "organic" fashion. Not only that, but the bitmap source you use for input would provide wildly different results.
Given that and the simplicity of just providing a bitmap as input, this could be adapted well to provide customization to the player as well.
This requires no more up front work than a Markov-based technique, but produces results on par with an L-grammar. It may be generally applicable to the problem domains of both!
PatchMatch is an algorithm to quickly... match similar patches in an image, it is used in a lot of texture synthesis algos. See my answer to fitzwatermellow for the difference between texture synthesis and WFC. So yes, it's related.
Photoshop's implementation of PatchMatch handles constraints perfectly, yes.
Yeah, you are right, I'll upload slower gifs. Right now the YouTube video has the slowest speed; in fact, it has segments with no frame-skipping at all: https://youtu.be/DOQTr2Xmlz0
The blog itself hasn't been updated in a while, as I've been in the midst of some pretty serious rewrites/refactors over the past several months. Since that post I've rewritten the game in TypeScript, rewritten the core engine to use the ECS (Entity Component System) pattern, and right now am actually in the midst of a rewrite of pretty much the whole game. Hence I mentioned the rewrite of the mapgen code, the first version of which was probably written 2 years ago hehe. I had a huge amount of features built up over 2 years (ideologically similar to DF in that regard heh) but realized I needed to pare down the feature set to get an alpha out so people could play. It's also allowing me to address some fundamental flaws that the engine built up over the past year or two. So that's the point where I'm at now.
Feel free to sub to the mailing list if you'd like to know about any updates and when the game comes to fruition you'll be notified ;)
The way I see it, this would be the perfect way to produce semi-altered terrain sprites automatically. Right now, the textures are pseudo-randomly rotated for things like dirt blocks, grass tops, and stone. [1] However, with this you could create a multitude of different textures based off a single source.
Plus, dungeons could be a lot more interesting. It'd be interesting to see what could be done with it...
I don't have much to contribute other than to say this is really amazing, and I want to throw all kinds of things at it and see what happens. Pardon my ignorance of how this works, does this algorithm have anything to do with symmetry breaking?
No, not really. ConvChain, though, is related to symmetry breaking, in the same way that MCMC simulation of the Ising model is: https://github.com/mxgmn/ConvChain
Source code is a 1-dimensional array. For 1-dimensional arrays WFC is just a Markov chain. 2 and higher dimensional arrays are much more interesting because they have cycles, and there is no canonical way to generalize Markov chains to higher dimensions.
This will be abstract, but you seem to know your abstract algebra -- is it possible to do this kind of thing with graphs? It should be, right? And we all know code can be constructed with graphs, so… voila, you can generate code, no?
Just to make sure I understand, if I were to use 1D WFC with 1xN tiles, would it be the same as an (N-1)th order Markov chain? Or would it be a 1st-order Markov chain with (N-1) simultaneous outputs?
In overlapping models we store probabilities for NxN blocks of colors/tiles. In non-overlapping models we store probabilities for individual colors/tiles.
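To make the Markov-chain comparison concrete, here is a hedged sketch (Python, illustrative only): grouping overlapping 1×N windows of a 1-D input by their (N-1)-symbol prefix yields exactly the transition counts of an order-(N-1) Markov chain:

```python
from collections import Counter, defaultdict

def order_n_minus_1_chain(sequence, n):
    """Overlapping 1xN windows of a 1-D input, grouped by their (N-1)-prefix,
    are the transition counts of an order-(N-1) Markov chain."""
    windows = Counter(tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))
    transitions = defaultdict(Counter)
    for window, count in windows.items():
        prefix, nxt = window[:-1], window[-1]
        transitions[prefix][nxt] += count
    return transitions  # P(next | previous N-1 symbols), up to normalization

print(order_n_minus_1_chain("abracadabra", 3)[("a", "b")])  # Counter({'r': 2})
```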
The README says it uses real-valued probability amplitudes, not a complex-valued wavefunction. Could the simulation be done with complex numbers, and if so, how would the results differ?
We need to interpret those coefficients somehow. Real coefficients can be interpreted as mixing of colors, but for complex ones I don't see a good interpretation.
Can this approach be used to generate the "missing parts" in an unfinished song? If a composer has a few good parts but is too lazy to finish the whole piece, for example? Or to "extend" Moonlight, for example?
There are special approaches to generating music. The best ratio of quality to complexity that I know of are Markov constraints https://www.youtube.com/watch?v=buXqNqBFd6E and WaveNet. I don't think WFC offers something useful and new for the generation of music.
I wonder too =). But it'll run like forever on a high-res image. For high-res images you want to use something like texture synthesis; see my reply to fitzwatermellow for more.
I'm not experienced with license law, but people have told me that it's better to have the license text in the source files themselves, because I have samples in the repo and I have no idea who holds the rights to them.
It's C#. I'm not familiar with it, which is why I was surprised that after downloading Visual Studio Tools, I opened the Developer Command Prompt, ran `csc *.cs`, and was left with one nice executable, Main.exe.