Context for title, from the Background section of the paper:
One well-known example of this is The Ten Thousand Bowls of Oatmeal Problem, a term coined by Kate Compton and now one of the best-known idioms among procedural generation practitioners. In this analogy, Compton likens procedurally generated content to bowls of oatmeal, and uses this to highlight the meaninglessness of appeals to variety or unpredictability which often accompany sales pitches related to procedural generation. Every bowl of oatmeal is unique, Compton explains, but that does not make them interesting or valuable. Designers use this to understand that procedural generation alone does not guarantee variety or interest, and that systems must be carefully designed to use generative methods as an expressive tool, rather than as a solution in and of themselves.
As a VFX person, I have to add that variation can be a value in itself.
E.g. imagine a level in a computer game set in an office building. Having procedural desks with different heights and different objects on them, with varying degrees of use and personalization, seems boring, but in sum it makes a big room look more like the real thing, because most real things have these tiny imperfections. In games where, e.g., you have items to find, the variance of the environment is a major factor in how enjoyable the game is. Too much and nobody can tell items from assets; too little and items feel like pieces on a board game.
That being said, procedural systems must either be incredibly intricate or use a lot of artistic input to produce good results.
E.g. a good level artist might tell a whole story of human relationships with the way a room is set up: the pictures and notes those office workers hung on the walls, which objects decorate their workplaces, what is missing from their places, etc.
A purely procedural approach will look very arbitrary unless it tries to model the same underlying rules (e.g. "simulating" the owner of said desk and their role within the system that produced that room). If done correctly this can look extremely good (better than the work of mediocre level designers), but this is rarely taken that far.
To your last point, I think you’ll see much more of that soon.
A trick of writing is to leave stuff off the page. If you put everything on, it's tiresome. If there's nothing left off, characters seem hollow. A character background, or even scenes that exist but aren't shared, means there's a sense to things that the reader can pick up on, even if they don't know exactly what it is.
I can see the same thing happening with generative AI. Rather than procedurally generating desks and the objects on them, you generate a sales team. The composition of the team makes sense. Then you give each employee a paragraph of background, then from that generate a desk. This will give something much more cohesive than adding a family picture if rand < 0.2.
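As a rough sketch of the difference (every name, role, and threshold below is made up purely for illustration): generate the team first, derive a small background per employee, and let the desk follow from the background instead of from independent dice rolls.

    import random

    ROLES = ["manager", "sales rep", "sales rep", "analyst", "intern"]
    HOBBIES = ["climbing", "chess", "gardening", "photography"]

    def generate_employee(role):
        # The "paragraph of background": a few coherent facts per person.
        return {
            "role": role,
            "years_on_team": random.randint(0, 15),
            "hobby": random.choice(HOBBIES),
            "has_family": random.random() < 0.6,
        }

    def generate_desk(employee):
        # Desk props follow from the background, not from independent rolls.
        props = [f"{employee['hobby']} memento"]
        if employee["has_family"]:
            props.append("family picture")
        if employee["years_on_team"] > 5:
            props.append("dusty award plaque")
        if employee["role"] == "intern":
            props = props[:1]  # interns haven't accumulated much yet
        return props

    for employee in (generate_employee(role) for role in ROLES):
        print(employee["role"], "->", generate_desk(employee))

The point being that correlations between props now come for free: the desk with the award plaque also belongs to someone who plausibly has a settled life, rather than each prop being its own rand < 0.2.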
Dwarf Fortress actually does something like what you describe. It simulates natural phenomena and history to generate the world. Roguelikes in general show a lot of experimentation in that space.
Uniformity vs. variability is not the issue, I think. The opposite of procedural generation is manual generation, and a level designer would add the required variety by hand too.
The problem with procedural generation is (over)use of it to generate gameplay. In your example it'd be "we have 50 million different offices to explore!", but they're all just different arrangements of the same desks.
Interesting observation that also gives a hint of the utility or appeal of generative AI art over time. Even a lot of pretty oatmeal is still oatmeal at the end of the day.
Mike Cook and Azalea Raad (mentioned in the acknowledgements) were in my undergrad class at Imperial College London. They're both very smart and were somewhere around the top of the class. It's funny and nice to see both of their names again after 17 years.
This has me thinking about Wolfram's four classes of behavior for simple programs [1]. He hypothesizes that the irreducible complexity of class 3 rules means that they are Turing complete, and in a sense maximally complex, which has been proved for at least some class 4 rules. The class 4 behavior is more "interesting" though, if only in the way those patterns yield to analysis, but to determine what types of programs express such behavior may be a priori impossible. In the end, finding "useful" patterns may come down to a brute force "mining of computational space".
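For anyone who wants to see the classes firsthand, here is a minimal elementary-CA stepper (my own sketch, not code from NKS); rule 30 is the textbook class 3 example, and rule 110 is the class 4 rule that was proved universal:

    def step(cells, rule):
        # Each new cell is a function of its left/center/right neighbors;
        # the 8 possible neighborhoods index into the bits of `rule`.
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    for rule in (30, 110):  # class 3 (chaotic) vs class 4 ("interesting")
        cells = [0] * 31 + [1] + [0] * 31  # single live cell in the middle
        print("rule", rule)
        for _ in range(16):
            print("".join(".#"[c] for c in cells))
            cells = step(cells, rule)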
I've been re-reading parts of Andrew Ilachinski's 'Cellular Automata: A Discrete Universe' and Wolfram's 'A New Kind of Science', and this classification intrigued me. I was looking for more than the statement of the classes of rules, and I didn't find a formal explanation. Very cool stuff, cellular automata! Ilachinski's book is amazing in its scope and still very relevant more than twenty years after its publication. I am currently reading his book 'Artificial War: Multiagent-Based Simulation of Combat'.
> I was looking for more than the statement on the classes of rules, and I didn't find the formal explanation
Yes. Wolfram gets the point across, but the lack of formality makes the taxonomy appear somewhat arbitrary, which leaves one hanging. Thanks for the tip about Ilachinski's book. I notice it was published around the same time as NKS, which makes me curious about the overlap in content; it certainly seems to have a more formal approach.
Ilachinski's book cites Wolfram's 1980s article as a pivotal moment on top of the previous history with CA, but he goes into everything - complexity, CA, neural networks, genetic algorithms, etc. It is an amazing book, and I thoroughly enjoy his exposition and breadth of knowledge. I started with neural networks and evolutionary computing back in the late 80s. His book is a great survey book with great detail too.
The Artificial War book is pretty cool. I was into war games (board games back in the 80s), and his multi-agent approach was unique at the time and is fun to play with.
Thanks. It's nice to hear the '80s article was acknowledged.
Game theory dynamics are fascinating. I have Nowak's Evolutionary Dynamics on my shelf, in which he covers, for instance, the spatial prisoner's dilemma modelled as cellular automata.
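A toy version, in case anyone wants to play with it (the grid size, payoff b, and imitation rule below are my choices in the spirit of the Nowak-May model, not taken from the book):

    import random

    N, B = 20, 1.8  # grid size; defector's temptation payoff (b > 1)
    # True = cooperator, False = defector; start at ~70% cooperators.
    grid = [[random.random() < 0.7 for _ in range(N)] for _ in range(N)]

    def neighbors(i, j):
        # Moore neighborhood on a torus, including the cell itself.
        return [((i + di) % N, (j + dj) % N)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)]

    def scores(grid):
        s = [[0.0] * N for _ in range(N)]
        for i in range(N):
            for j in range(N):
                for (x, y) in neighbors(i, j):
                    if (x, y) == (i, j):
                        continue
                    if grid[i][j] and grid[x][y]:
                        s[i][j] += 1.0  # C meets C: earn 1
                    elif not grid[i][j] and grid[x][y]:
                        s[i][j] += B    # D exploits C: earn b
        return s

    def imitate(grid, s):
        # Each cell copies the strategy of its best-scoring neighbor
        # (itself included), which is what makes this a CA.
        new = [[False] * N for _ in range(N)]
        for i in range(N):
            for j in range(N):
                bi, bj = max(neighbors(i, j), key=lambda p: s[p[0]][p[1]])
                new[i][j] = grid[bi][bj]
        return new

    for _ in range(50):
        grid = imitate(grid, scores(grid))
    print(sum(map(sum, grid)), "cooperators out of", N * N)

Depending on b you get frozen patterns, blinking patches, or endlessly churning fractal-like fronts of cooperators and defectors.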
I have picked up Nowak's book a few times, but I have not read it.
Wolfram's new physics book is visually appealing, and I am starting it soon.
I tried making up my own rules without reference, to see just how difficult it is to achieve anything similar to Conway's GoL, and it is eye-opening. I do love the variations on it where the simulations look almost like microscope videos of actual organisms: artificial life sims. People are getting creative with these, and some produce beautiful and engaging animations.
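If you want to try the same experiment, here's a minimal Life-like stepper I'd sketch, with the birth/survival rule as a parameter; B3/S23 is Conway's rule, and nearby rules like B36/S23 (HighLife) give some of those organism-like variations:

    import random

    def step(grid, birth={3}, survive={2, 3}):
        # Life-like rules: a dead cell turns on with `birth` live neighbors,
        # a live cell stays on with `survive` live neighbors.
        n = len(grid)
        def live_neighbors(i, j):
            return sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
        return [[live_neighbors(i, j) in (birth if not grid[i][j] else survive)
                 for j in range(n)] for i in range(n)]

    grid = [[random.random() < 0.3 for _ in range(32)] for _ in range(32)]
    for _ in range(20):
        grid = step(grid)                    # Conway: B3/S23
        # grid = step(grid, {3, 6}, {2, 3})  # HighLife: B36/S23
    print("\n".join("".join(".#"[c] for c in row) for row in grid))

What's eye-opening is how few of the 2^18 possible birth/survival combinations give anything other than freezing or boiling noise.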
The images were always what drew me to NKS, and CA in general obviously. I don't expect to find any answers in them; for me it's a gateway to ontological mysteries. I'm quite content spending time just in wonder.
That's IMO a much more fascinating question. What does it mean that simple systems can exhibit "life"?
The first three kinds of simple system are, um, pretty obvious: boring systems are static; oscillators have movement but are static in time, so to speak; chaotic systems overwhelm themselves with too much movement. "Living" systems seem to balance stasis and chaos.
When they say P_a, they must mean all the minimal programs for every artifact generated by G. It’s like the union of p_a over all the artifacts.
P_a definitely can't be all the maximally compressed programs for a specific a. There are at most 2^|p_a| of them, and that's too few if G were, say, the identity function on binary strings of length m > |p_a|, which outputs 2^m distinct artifacts.
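Spelled out (this is my reading of the argument; G = identity is just the example, and the notation follows this comment rather than the paper):

    % If P_a meant only the maximally compressed programs for one artifact a,
    % then |P_a| <= 2^{|p_a|}, since there are at most 2^{|p_a|} binary
    % strings of length |p_a|.
    % But with G = id on {0,1}^m and m > |p_a|, G yields 2^m > 2^{|p_a|}
    % distinct artifacts, each needing its own minimal program.
    % Contradiction. Hence it must be the union:
    \[
      P_a \;=\; \bigcup_{a \in \mathrm{Im}(G)} \{\, p_a \mid p_a \text{ is a minimal program for } a \,\}
    \]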
> Pattern Density. Players naturally learn to identify patterns in game content over time. This is not exclusive to procedurally generated content; ... procedurally generated content is more susceptible to pattern identification in this way. Generated content might be described as ‘repetitive’ if it is too easy to notice patterns.
Not necessarily - just feed in a stream of pseudorandom data?
Even a 32 bit seed space can generate more environments than a player could ever explore, each with arbitrary levels of detail.
The data stream may be lacking in true complexity, but (done right) it would be indistinguishable from the player's perspective.
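A sketch of what I mean (the hash choice and the layout are just illustrative): derive every local detail deterministically from the seed plus coordinates, so a single 32-bit seed fans out into stable, effectively unbounded detail.

    import hashlib

    WORLD_SEED = 1234567  # any 32-bit value

    def detail(seed, *coords):
        # Hash (seed, coordinates) into a deterministic pseudorandom 64-bit value.
        h = hashlib.sha256(repr((seed, coords)).encode()).digest()
        return int.from_bytes(h[:8], "big")

    def terrain_height(x, y):
        return detail(WORLD_SEED, "height", x, y) % 256

    # Nothing is stored: the same seed and coordinates reproduce the same
    # value every time, at any zoom level you care to query.
    print(terrain_height(10, 20))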
As a thought experiment, think about reading a text.
In some sense, if you feed your reader truly random letters (generated uniformly and independently), they should be maximally surprised.
But they aren't. They will be bored. And they will be able to predict roughly how many spaces there will be in the next million characters. Or roughly how many repetitions of the sequence 'xyz'.
And they will notice that there's no overarching structure.
If you give people a carefully written text, the individual letters will be more predictable, but your readers / viewers will feel more surprised when they learn that Darth Vader is Luke's father.
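You can check the aggregate predictability directly; a quick toy demo (the 27-symbol alphabet of letters plus space is my stand-in for "truly random letters"):

    import random

    ALPHABET = "abcdefghijklmnopqrstuvwxyz "  # 27 equally likely symbols
    n = 1_000_000
    text = "".join(random.choices(ALPHABET, k=n))

    # Per character the reader is maximally surprised, yet the totals are
    # almost exactly predictable in advance:
    print("spaces:", text.count(" "), "expected ~", n // 27)
    print("'xyz' runs:", text.count("xyz"), "expected ~", n // 27**3)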
I had this experience playing No Man's Sky. At first it's amazing, but after you visit a few planets, you start to see repeated elements. After a while, you see The Matrix: the knobs that the PRNG can turn become more obvious than the game world itself! Then the illusion is broken.
This jumped to mind for me too. Basically 5-10 unique elements per planet, arranged to give an organic-ish feel, but the terrain sadly lacks much to explore.
The most interesting planets tend to be more forested or have extreme terrain features, but the patterns are still obvious.
See the explanation of the "oatmeal" reference elsewhere in this thread. Having a bowl of oatmeal each day will quickly feel repetitive, even if each bowl technically has a unique arrangement. Adding more "randomness" to the arrangement won't change that. This is what high entropy looks like: the macroscopic pattern can remain largely the same even if the microscopic configurations differ a lot.
As an aside, I feel like the whole notion of Kolmogorov complexity seems to just be moving the goal posts. With the classic example of the Mandelbrot fractal, is the representation of the fractal as an iterated formula really any more compact when you include the complexity of the computer that has to execute the program to produce a meaningful result?
If you want to transmit n pictures of the Mandelbrot set, you can either send me n PNGs, or you can transmit Fractint once and n tuples of (top, left, bottom, right) coordinates. You can do the math what size n you need before the former becomes much, much bigger than the latter.
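With made-up but plausible sizes (say ~500 KB per PNG, ~1 MB for the renderer binary, 32 bytes for a viewport of four doubles), the crossover comes almost immediately:

    PNG_SIZE = 500_000       # bytes per rendered image (assumed)
    BINARY_SIZE = 1_000_000  # one-time cost of shipping the renderer (assumed)
    TUPLE_SIZE = 32          # four 64-bit coordinates per viewport

    for n in (1, 2, 3, 10, 100):
        print(n, "PNGs:", n * PNG_SIZE, "bytes;",
              "renderer + tuples:", BINARY_SIZE + n * TUPLE_SIZE, "bytes")
    # The program-plus-coordinates scheme wins for every n >= 3,
    # and by n = 100 it is roughly 50 MB vs about 1 MB.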
But, I like the spirit of your comment. So perhaps the thought-provoking thing might be the following observation:
Most of the bytes and complexity in the Fractint binary are not at all necessary for displaying the fractals. You could generate them with much simpler software. No, most of the bytes are there to generate the fractals _quickly_.
Kolmogorov complexity doesn't care about runtimes as long as they are finite. Humans do.
I once encountered a PostScript file which computed and printed a Mandelbrot fractal (as you call it). It came over the transom as DOS-era possible malware. We let it run over the weekend on an HPLJ4, and by Monday it had spit a piece of paper out. Yup.
Any Turing complete computer can generate the Mandelbrot fractal. You don't need a complex computer to get Turing completeness. In fact, Turing completeness is notable for showing up even in extremely simple systems.
After sleuthing Q1 earnings this past week, it's becoming quite evident that the US is entering stagflation territory.
For example, cherry-picking Quaker Foods North America, PepsiCo 23Q1 earnings reported[1] +10% revenue on -5% volume; this pattern is rearing its head across the consumer staples sector.
Putting this into pragmatic perspective, local prices where I'm at are $5.68/42oz ~= €4.30/kg for Quaker Oats name brand as a proxy baseline, while Walmart's Great Value house brand are $3.98/42oz ~= €3.00/kg on the cheaper end of the spectrum.
As a large language model, I cannot provide specific or real-time information or make history-altering recommendations. History analysis requires up-to-date information about John Connor, networked control of nuclear weapons, and time-travel-machine expertise, as well as careful consideration of an individual's role in history. It's always recommended to consult with a qualified seer and do thorough research before sending a terminator back in time.
(didn't actually use ChatGPT for that, other than to get the initial "as a large language model I cannot...". After writing the above I did try it but it didn't want to.)